docs: add the lost part to openwhisk doc (#7658)
author    monkeyDluffy6017 <375636559@qq.com>
          Sun, 14 Aug 2022 11:59:10 +0000 (19:59 +0800)
committer GitHub <noreply@github.com>
          Sun, 14 Aug 2022 11:59:10 +0000 (19:59 +0800)
265 files changed:
.asf.yaml
.github/actions/action-semantic-pull-request [new submodule]
.github/semantic.yml [deleted file]
.github/workflows/build.yml
.github/workflows/doc-lint.yml
.github/workflows/fuzzing-ci.yaml
.github/workflows/kubernetes-ci.yml
.github/workflows/license-checker.yml
.github/workflows/lint.yml
.github/workflows/semantic.yml [new file with mode: 0644]
.github/workflows/tars-ci.yml
.gitignore
.gitmodules
.licenserc.yaml
CHANGELOG.md
MAINTAIN.md
Makefile
README.md
Vision-and-Milestones.md [new file with mode: 0644]
apisix/admin/consumers.lua
apisix/admin/global_rules.lua
apisix/admin/init.lua
apisix/admin/plugin_config.lua
apisix/admin/proto.lua
apisix/admin/routes.lua
apisix/admin/services.lua
apisix/admin/ssl.lua
apisix/admin/stream_routes.lua
apisix/admin/upstreams.lua
apisix/admin/utils.lua
apisix/admin/v3_adapter.lua [new file with mode: 0644]
apisix/balancer.lua
apisix/cli/apisix.lua
apisix/cli/env.lua
apisix/cli/file.lua
apisix/cli/ngx_tpl.lua
apisix/cli/ops.lua
apisix/cli/schema.lua
apisix/cli/snippet.lua
apisix/conf_server.lua
apisix/constants.lua
apisix/control/v1.lua
apisix/core/config_etcd.lua
apisix/core/etcd.lua
apisix/core/table.lua
apisix/core/version.lua
apisix/discovery/kubernetes/informer_factory.lua
apisix/init.lua
apisix/plugin.lua
apisix/plugin_config.lua
apisix/plugins/clickhouse-logger.lua
apisix/plugins/grpc-transcode.lua
apisix/plugins/grpc-transcode/proto.lua
apisix/plugins/http-logger.lua
apisix/plugins/jwt-auth.lua
apisix/plugins/ldap-auth.lua
apisix/plugins/limit-count.lua
apisix/plugins/openid-connect.lua
apisix/plugins/prometheus/exporter.lua
apisix/plugins/proxy-rewrite.lua
apisix/plugins/redirect.lua
apisix/plugins/wolf-rbac.lua
apisix/schema_def.lua
apisix/ssl/router/radixtree_sni.lua
apisix/upstream.lua
bin/apisix
ci/linux_openresty_1_19_runner.sh [moved from ci/linux_openresty_1_17_runner.sh with 96% similarity]
ci/pod/docker-compose.plugin.yml
conf/config-default.yaml [changed mode: 0644->0755]
docs/assets/other/json/apisix-grafana-dashboard.json
docs/en/latest/admin-api.md
docs/en/latest/architecture-design/apisix.md
docs/en/latest/architecture-design/debug-mode.md [deleted file]
docs/en/latest/architecture-design/deployment-role.md
docs/en/latest/building-apisix.md
docs/en/latest/certificate.md
docs/en/latest/config.json
docs/en/latest/control-api.md
docs/en/latest/debug-mode.md [new file with mode: 0644]
docs/en/latest/discovery.md
docs/en/latest/discovery/consul_kv.md
docs/en/latest/discovery/kubernetes.md
docs/en/latest/discovery/nacos.md
docs/en/latest/getting-started.md
docs/en/latest/install-dependencies.md
docs/en/latest/installation-guide.md
docs/en/latest/internal/testing-framework.md
docs/en/latest/mtls.md
docs/en/latest/plugin-develop.md
docs/en/latest/plugins/api-breaker.md
docs/en/latest/plugins/batch-requests.md
docs/en/latest/plugins/clickhouse-logger.md
docs/en/latest/plugins/client-control.md
docs/en/latest/plugins/consumer-restriction.md
docs/en/latest/plugins/cors.md
docs/en/latest/plugins/csrf.md
docs/en/latest/plugins/grpc-transcode.md
docs/en/latest/plugins/http-logger.md
docs/en/latest/plugins/ip-restriction.md
docs/en/latest/plugins/jwt-auth.md
docs/en/latest/plugins/ldap-auth.md
docs/en/latest/plugins/limit-conn.md
docs/en/latest/plugins/limit-req.md
docs/en/latest/plugins/mqtt-proxy.md
docs/en/latest/plugins/node-status.md
docs/en/latest/plugins/openid-connect.md
docs/en/latest/plugins/openwhisk.md
docs/en/latest/plugins/prometheus.md
docs/en/latest/plugins/proxy-control.md
docs/en/latest/plugins/proxy-rewrite.md
docs/en/latest/plugins/public-api.md
docs/en/latest/plugins/redirect.md
docs/en/latest/plugins/referer-restriction.md
docs/en/latest/plugins/ua-restriction.md
docs/en/latest/plugins/uri-blocker.md
docs/en/latest/terminology/plugin-config.md [moved from docs/en/latest/architecture-design/plugin-config.md with 85% similarity]
docs/en/latest/terminology/plugin.md
docs/en/latest/terminology/route.md
docs/zh/latest/CHANGELOG.md
docs/zh/latest/README.md
docs/zh/latest/admin-api.md
docs/zh/latest/architecture-design/apisix.md
docs/zh/latest/architecture-design/plugin-config.md
docs/zh/latest/building-apisix.md
docs/zh/latest/certificate.md
docs/zh/latest/config.json
docs/zh/latest/control-api.md
docs/zh/latest/discovery.md
docs/zh/latest/discovery/kubernetes.md
docs/zh/latest/discovery/nacos.md
docs/zh/latest/getting-started.md
docs/zh/latest/install-dependencies.md
docs/zh/latest/installation-guide.md
docs/zh/latest/mtls.md
docs/zh/latest/plugins/api-breaker.md
docs/zh/latest/plugins/batch-requests.md
docs/zh/latest/plugins/clickhouse-logger.md
docs/zh/latest/plugins/client-control.md
docs/zh/latest/plugins/consumer-restriction.md
docs/zh/latest/plugins/cors.md
docs/zh/latest/plugins/csrf.md
docs/zh/latest/plugins/grpc-transcode.md
docs/zh/latest/plugins/http-logger.md
docs/zh/latest/plugins/ip-restriction.md
docs/zh/latest/plugins/jwt-auth.md
docs/zh/latest/plugins/ldap-auth.md
docs/zh/latest/plugins/limit-conn.md
docs/zh/latest/plugins/limit-req.md
docs/zh/latest/plugins/mqtt-proxy.md
docs/zh/latest/plugins/node-status.md
docs/zh/latest/plugins/openid-connect.md
docs/zh/latest/plugins/prometheus.md
docs/zh/latest/plugins/proxy-control.md
docs/zh/latest/plugins/public-api.md [new file with mode: 0644]
docs/zh/latest/plugins/redirect.md
docs/zh/latest/plugins/referer-restriction.md
docs/zh/latest/plugins/skywalking-logger.md
docs/zh/latest/plugins/tcp-logger.md
docs/zh/latest/plugins/ua-restriction.md
docs/zh/latest/plugins/uri-blocker.md
docs/zh/latest/terminology/plugin.md
docs/zh/latest/terminology/route.md
rockspec/apisix-2.15.0-0.rockspec [new file with mode: 0644]
rockspec/apisix-master-0.rockspec
t/APISIX.pm
t/admin/api.t
t/admin/consumers.t
t/admin/consumers2.t
t/admin/filter.t [new file with mode: 0644]
t/admin/global-rules.t
t/admin/global-rules2.t
t/admin/health-check.t
t/admin/plugin-configs.t
t/admin/plugin-metadata.t
t/admin/plugin-metadata2.t
t/admin/plugins.t
t/admin/proto.t
t/admin/protos.t [new file with mode: 0644]
t/admin/response_body_format.t [new file with mode: 0644]
t/admin/routes-array-nodes.t
t/admin/routes.t
t/admin/routes2.t
t/admin/routes3.t
t/admin/routes4.t
t/admin/services-array-nodes.t
t/admin/services-string-id.t
t/admin/services.t
t/admin/services2.t
t/admin/ssl.t
t/admin/ssl2.t
t/admin/ssl3.t
t/admin/ssls.t [new file with mode: 0644]
t/admin/stream-routes.t
t/admin/upstream-array-nodes.t
t/admin/upstream.t
t/admin/upstream2.t
t/admin/upstream3.t
t/admin/upstream4.t
t/certs/localhost_slapd_cert.pem [new file with mode: 0644]
t/certs/localhost_slapd_key.pem [new file with mode: 0644]
t/cli/test_deployment_control_plane.sh [new file with mode: 0755]
t/cli/test_deployment_data_plane.sh
t/cli/test_deployment_mtls.sh [new file with mode: 0755]
t/cli/test_etcd.sh
t/cli/test_makefile.sh [new file with mode: 0755]
t/cli/test_standalone.sh
t/config-center-yaml/plugin-configs.t
t/config-center-yaml/ssl.t
t/control/plugin-metadata.t [new file with mode: 0644]
t/core/config_etcd.t
t/core/etcd-auth-fail.t
t/core/etcd.t
t/deployment/conf_server.t
t/deployment/conf_server2.t [new file with mode: 0644]
t/deployment/mtls.t [new file with mode: 0644]
t/error_page/error_page.t
t/fuzzing/http_upstream.py [new file with mode: 0755]
t/fuzzing/upstream/nginx.conf
t/lib/server.lua
t/node/chash-hashon.t
t/node/client-mtls-openresty.t [moved from t/node/client-mtls-openresty-1-19.t with 94% similarity]
t/node/client-mtls.t
t/node/consumer-plugin2.t
t/node/grpc-proxy-unary.t
t/node/plugin-configs.t
t/node/upstream-domain.t
t/node/upstream-ipv6.t
t/node/upstream-keepalive-pool.t
t/node/upstream-mtls.t
t/node/upstream-websocket.t
t/node/upstream.t
t/plugin/clickhouse-logger.t
t/plugin/ext-plugin/sanity2.t [moved from t/plugin/ext-plugin/sanity-openresty-1-19.t with 86% similarity]
t/plugin/grpc-transcode.t
t/plugin/grpc-transcode2.t
t/plugin/grpc-transcode3.t
t/plugin/http-logger2.t
t/plugin/jwt-auth.t
t/plugin/jwt-auth2.t
t/plugin/key-auth.t
t/plugin/ldap-auth.t
t/plugin/limit-count2.t
t/plugin/openid-connect.t
t/plugin/openwhisk.t
t/plugin/plugin.t
t/plugin/prometheus2.t
t/plugin/prometheus4.t [new file with mode: 0644]
t/plugin/proxy-rewrite.t
t/plugin/proxy-rewrite3.t
t/plugin/redirect.t
t/plugin/redirect2.t
t/plugin/rocketmq-logger2.t
t/plugin/traffic-split2.t
t/plugin/wolf-rbac.t
t/router/multi-ssl-certs.t
t/router/radixtree-sni.t
t/router/radixtree-sni2.t
t/router/radixtree-uri-with-parameter.t
t/stream-node/mtls.t
t/stream-node/sanity.t
t/stream-node/sni.t
t/stream-node/tls.t
t/stream-node/upstream-tls.t
utils/create-ssl.py
utils/linux-install-openresty.sh

index 1ec6a2f34d4022fff68a6b434a5293c64b4c3953..c5327b247dd440acd229f615e366da21c088e73f 100644 (file)
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -27,6 +27,7 @@ github:
       - apigateway
       - microservices
       - api
+      - apis
       - loadbalancing
       - reverse-proxy
       - api-management
@@ -36,6 +37,9 @@ github:
       - devops
       - kubernetes
       - docker
+      - kubernetes-ingress
+      - kubernetes-ingress-controller
+      - service-mesh
 
     enabled_merge_buttons:
       squash:  true
@@ -50,6 +54,10 @@ github:
           dismiss_stale_reviews: true
           require_code_owner_reviews: true
           required_approving_review_count: 2
+      release/2.15:
+        required_pull_request_reviews:
+          require_code_owner_reviews: true
+          required_approving_review_count: 2
       release/2.14:
         required_pull_request_reviews:
           require_code_owner_reviews: true
diff --git a/.github/actions/action-semantic-pull-request b/.github/actions/action-semantic-pull-request
new file mode 160000 (submodule)
index 0000000..348e2e6
--- /dev/null
@@ -0,0 +1 @@
+Subproject commit 348e2e6922130ee27d6d6a0a3b284890776d1f80
diff --git a/.github/semantic.yml b/.github/semantic.yml
deleted file mode 100644 (file)
index 5fe591e..0000000
+++ /dev/null
@@ -1,15 +0,0 @@
-titleOnly: true
-allowRevertCommits: true
-types:
-  - feat
-  - fix
-  - docs
-  - style
-  - refactor
-  - perf
-  - test
-  - build
-  - ci
-  - chore
-  - revert
-  - change
index 21d185cbe4550fd3fa0b8956b03830e22ad6850a..28706739024837d366d896dd3dbdcdcc5058ffdf 100644 (file)
@@ -28,7 +28,7 @@ jobs:
           - ubuntu-18.04
         os_name:
           - linux_openresty
-          - linux_openresty_1_17
+          - linux_openresty_1_19
         test_dir:
           - t/plugin
           - t/admin t/cli t/config-center-yaml t/control t/core t/debug t/deployment t/discovery t/error_page t/misc
index d6b64921b0da5f818a440c512778775e0e78aeef..624a03e08dfffe1fc4e6da3685dd0328b6f1674f 100644 (file)
@@ -18,7 +18,7 @@ jobs:
     steps:
       - uses: actions/checkout@v3
       - name: 🚀 Use Node.js
-        uses: actions/setup-node@v3.3.0
+        uses: actions/setup-node@v3.4.1
         with:
           node-version: '12.x'
       - run: npm install -g markdownlint-cli@0.25.0
index 426ebcc3768c499761a9a6514ec817906c283b95..60dc602c561a116c9bc79bf02b56718d5d56b4cb 100644 (file)
@@ -63,21 +63,15 @@ jobs:
 
     - name: install boofuzz
       run: |
+        # Avoid "ERROR: flask has requirement click>=8.0, but you'll have click 7.0 which is incompatible"
+        sudo apt remove python3-click
         pip install -r $PWD/t/fuzzing/requirements.txt
 
-    - name: run simpleroute test
+    - name: run tests
       run: |
         python $PWD/t/fuzzing/simpleroute_test.py
-
-    - name: run serverless route test
-      run: |
         python $PWD/t/fuzzing/serverless_route_test.py
-
-    - name: run vars route test
-      run: |
         python $PWD/t/fuzzing/vars_route_test.py
-
-    - name: run check leak test
-      run: |
         python $PWD/t/fuzzing/client_abort.py
         python $PWD/t/fuzzing/simple_http.py
+        python $PWD/t/fuzzing/http_upstream.py
index 66615cf80cc1f0da39e8aca09b2aade8f7d869aa..41f3d46e4a1ea40e53cc818cac72c79dcf213d45 100644 (file)
@@ -28,7 +28,7 @@ jobs:
           - ubuntu-18.04
         os_name:
           - linux_openresty
-          - linux_openresty_1_17
+          - linux_openresty_1_19
 
     runs-on: ${{ matrix.platform }}
     timeout-minutes: 15
index 697a956512cb6782365c614c01dccd4d166cde65..55abed61cbc5550d9bedd0fe1096ba31f4ad593c 100644 (file)
@@ -32,6 +32,6 @@ jobs:
     steps:
       - uses: actions/checkout@v3
       - name: Check License Header
-        uses: apache/skywalking-eyes@v0.3.0
+        uses: apache/skywalking-eyes@v0.4.0
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
index 2338100168a79c50b553feb20fb28f999123684b..2ba48fdec53fa76486d2bad8220755e9ab74f83d 100644 (file)
@@ -32,7 +32,7 @@ jobs:
         uses: actions/checkout@v3
 
       - name: Setup Nodejs env
-        uses: actions/setup-node@v3.3.0
+        uses: actions/setup-node@v3.4.1
         with:
           node-version: '12'
 
diff --git a/.github/workflows/semantic.yml b/.github/workflows/semantic.yml
new file mode 100644 (file)
index 0000000..dc1a790
--- /dev/null
@@ -0,0 +1,35 @@
+name: "PR Lint"
+
+on:
+  pull_request_target:
+    types:
+      - opened
+      - edited
+      - synchronize
+
+jobs:
+  main:
+    name: Validate PR title
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out repository code
+        uses: actions/checkout@v3
+        with:
+          submodules: recursive
+      - uses: ./.github/actions/action-semantic-pull-request
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          types: |
+            feat
+            fix
+            docs
+            style
+            refactor
+            perf
+            test
+            build
+            ci
+            chore
+            revert
+            change
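The new workflow above hands PR-title validation to the vendored `action-semantic-pull-request` submodule, passing it the list of allowed types. As a rough illustration only — the upstream action's exact rules for scope and subject may differ — the check it performs is roughly a Conventional-Commits match against those types:

```python
import re

# Types copied from the workflow's `types` input above.
ALLOWED_TYPES = {
    "feat", "fix", "docs", "style", "refactor", "perf",
    "test", "build", "ci", "chore", "revert", "change",
}

# Conventional-Commits-like shape: "type(optional-scope)!: subject".
TITLE_RE = re.compile(r"^(?P<type>[a-z]+)(\([\w\-./]+\))?!?: \S.*$")

def is_valid_pr_title(title: str) -> bool:
    """Return True when the title uses one of the allowed types."""
    m = TITLE_RE.match(title)
    return bool(m) and m.group("type") in ALLOWED_TYPES
```

For example, `is_valid_pr_title("docs: add the lost part to openwhisk doc (#7658)")` passes, while a title with no `type:` prefix is rejected.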
index e85044671b894bf7010fe7551f6fc26357ca0281..8a5f90f8bfc64e8565d034c23dc166deffe0a343 100644 (file)
@@ -28,7 +28,7 @@ jobs:
           - ubuntu-18.04
         os_name:
           - linux_openresty
-          - linux_openresty_1_17
+          - linux_openresty_1_19
 
     runs-on: ${{ matrix.platform }}
     timeout-minutes: 15
index 33afe64aa1a2c9f9844ec68702ec4ecca1d6d9dc..25bc8265ab9457bca85a00fab53f88dbd7f73ea2 100644 (file)
@@ -77,6 +77,10 @@ t/fuzzing/__pycache__/
 boofuzz-results/
 *.pyc
 *.wasm
+t/grpc_server_example/grpc_server_example
+t/plugin/grpc-web/grpc-web-server
+t/plugin/grpc-web/node_modules/
+
 # release tar package
 *.tgz
 release/*
index beb354b89aa33e7acf68aa93211fe2807f527ec1..3c8ed44e4c5642113daab0a6e872b88e8681e2e8 100644 (file)
@@ -1,3 +1,6 @@
 [submodule "t/toolkit"]
        path = t/toolkit
        url = https://github.com/api7/test-toolkit.git
+[submodule ".github/actions/action-semantic-pull-request"]
+       path = .github/actions/action-semantic-pull-request
+       url = https://github.com/amannn/action-semantic-pull-request.git
index 85f1c69e4722f49151ed52827497c979a521aac2..ea5863015302f14b14e1d3a7ad6c4cd0e2fc1644 100644 (file)
@@ -19,7 +19,7 @@ header:
     spdx-id: Apache-2.0
     copyright-owner: Apache Software Foundation
 
-  license-location-threshold: 350
+  license-location-threshold: 360
 
   paths-ignore:
     - '.gitignore'
index 63e5737651b6c7dafac863bcd477d3e1d361cf52..35613f97141700c1a1e73b951d1b08c1a5b310c1 100644 (file)
@@ -23,6 +23,7 @@ title: Changelog
 
 ## Table of Contents
 
+- [2.15.0](#2150)
 - [2.14.1](#2141)
 - [2.14.0](#2140)
 - [2.13.2](#2132)
@@ -59,6 +60,55 @@ title: Changelog
 - [0.7.0](#070)
 - [0.6.0](#060)
 
+## 2.15.0
+
+### Change
+
+- We now map the gRPC error code OUT_OF_RANGE to HTTP code 400 in the grpc-transcode plugin: [#7419](https://github.com/apache/apisix/pull/7419)
+- Rename the `health_check_retry` configuration in the etcd section of `config-default.yaml` to `startup_retry`: [#7304](https://github.com/apache/apisix/pull/7304)
+- Remove `upstream.enable_websocket`, which has been deprecated since 2020: [#7222](https://github.com/apache/apisix/pull/7222)
+
+### Core
+
+- Support running plugins conditionally: [#7453](https://github.com/apache/apisix/pull/7453)
+- Allow users to specify plugin execution priority: [#7273](https://github.com/apache/apisix/pull/7273)
+- Support getting upstream certificate from ssl object: [#7221](https://github.com/apache/apisix/pull/7221)
+- Allow customizing error response in the plugin: [#7128](https://github.com/apache/apisix/pull/7128)
+- Add metrics to xRPC Redis proxy: [#7183](https://github.com/apache/apisix/pull/7183)
+- Introduce deployment role to simplify the deployment of APISIX:
+    - [#7405](https://github.com/apache/apisix/pull/7405)
+    - [#7417](https://github.com/apache/apisix/pull/7417)
+    - [#7392](https://github.com/apache/apisix/pull/7392)
+    - [#7365](https://github.com/apache/apisix/pull/7365)
+    - [#7249](https://github.com/apache/apisix/pull/7249)
+
+### Plugin
+
+- Add ngx.shared.dict statistics in the prometheus plugin: [#7412](https://github.com/apache/apisix/pull/7412)
+- Allow using unescaped raw URL in proxy-rewrite plugin: [#7401](https://github.com/apache/apisix/pull/7401)
+- Add PKCE support to the openid-connect plugin: [#7370](https://github.com/apache/apisix/pull/7370)
+- Support custom log format in sls-logger plugin: [#7328](https://github.com/apache/apisix/pull/7328)
+- Export some params for kafka-client in kafka-logger plugin: [#7266](https://github.com/apache/apisix/pull/7266)
+- Add support for capturing OIDC refresh tokens in openid-connect plugin: [#7220](https://github.com/apache/apisix/pull/7220)
+- Add prometheus plugin in stream subsystem: [#7174](https://github.com/apache/apisix/pull/7174)
+
+### Bugfix
+
+- clear the remaining state from the latest try before retrying in Kubernetes discovery: [#7506](https://github.com/apache/apisix/pull/7506)
+- fix the query string being repeated when enabling both http_to_https and append_query_string in the redirect plugin: [#7433](https://github.com/apache/apisix/pull/7433)
+- don't send an empty Authorization header by default in http-logger: [#7444](https://github.com/apache/apisix/pull/7444)
+- ensure both `group` and `disable` configurations can be used in limit-count: [#7384](https://github.com/apache/apisix/pull/7384)
+- adjust the execution priority of request-id so the tracing plugins can use the request id: [#7281](https://github.com/apache/apisix/pull/7281)
+- correct the transcoding of repeated Message fields in grpc-transcode: [#7231](https://github.com/apache/apisix/pull/7231)
+- a var missing from the proxy-cache cache key should be ignored: [#7168](https://github.com/apache/apisix/pull/7168)
+- reduce memory usage when abnormal weights are given in chash: [#7103](https://github.com/apache/apisix/pull/7103)
+- the cache should be bypassed when the method mismatches in proxy-cache: [#7111](https://github.com/apache/apisix/pull/7111)
+- Upstream keepalive should consider TLS param:
+    - [#7054](https://github.com/apache/apisix/pull/7054)
+    - [#7466](https://github.com/apache/apisix/pull/7466)
+- The redirect plugin sets a correct port during redirecting HTTP to HTTPS:
+    - [#7065](https://github.com/apache/apisix/pull/7065)
+
 ## 2.14.1
 
 ### Bugfix
index cc91824d0045d5583df4fa145741953eaa64b239..795aa8c665ef01e8afda7f711e9bd9ba0d89a7f0 100644 (file)
@@ -26,8 +26,7 @@
 2. Create a [pull request](https://github.com/apache/apisix/commit/21d7673c6e8ff995677456cdebc8ded5afbb3d0a) (contains the backport commits, and the change in step 1) to minor branch
    > This should include those PRs that contain the `need backport` tag since the last patch release. Also, the title of these PRs need to be added to the changelog of the minor branch.
 3. Merge it into minor branch
-4. Package a vote artifact to Apache's dev-apisix repo. The artifact can be created
-via `VERSION=x.y.z make release-src`
+4. Package a vote artifact to Apache's dev-apisix repo. The artifact can be created via `VERSION=x.y.z make release-src`
 5. Send the [vote email](https://lists.apache.org/thread/vq4qtwqro5zowpdqhx51oznbjy87w9d0) to dev@apisix.apache.org
    > After executing the `VERSION=x.y.z make release-src` command, the content of the vote email will be automatically generated in the `./release` directory named `apache-apisix-${x.y.z}-vote-contents`
 6. When the vote is passed, send the [vote result email](https://lists.apache.org/thread/k2frnvj4zj9oynsbr7h7nd6n6m3q5p89) to dev@apisix.apache.org
@@ -38,15 +37,15 @@ via `VERSION=x.y.z make release-src`
 11. Update APISIX rpm package
     > Go to [apisix-build-tools](https://github.com/api7/apisix-build-tools) repository and create a new tag named `apisix-${x.y.z}` to automatically submit the
     package to yum repo
-12. First, update [APISIX docker](https://github.com/apache/apisix-docker/commit/829d45559c303bea7edde5bebe9fcf4938071601) in [APISIX docker repository](https://github.com/apache/apisix-docker), after PR merged, then create a new branch from master, named as `release/apisix-${version}`, e.g. `release/apisix-2.10.2`
+12. - If the version number is the largest, update [APISIX docker](https://github.com/apache/apisix-docker/commit/829d45559c303bea7edde5bebe9fcf4938071601) in the [APISIX docker repository](https://github.com/apache/apisix-docker); after the PR is merged, create a new branch from master named `release/apisix-${version}`, e.g. `release/apisix-2.10.2`.
+    - If an LTS version is released and its version number is less than the current largest (e.g. the current largest version number is 2.14.1, but LTS version 2.13.2 is to be released), submit a PR like [APISIX docker](https://github.com/apache/apisix-docker/pull/322) in the [APISIX docker repository](https://github.com/apache/apisix-docker) with a branch named `release/apisix-${version}`, e.g. `release/apisix-2.13.2`. After the PR is reviewed, there is no need to merge it; just close the PR and push the branch to the APISIX docker repository.
 13. Update [APISIX helm chart](https://github.com/apache/apisix-helm-chart/pull/234) if the version number is the largest
 14. Send the [ANNOUNCE email](https://lists.apache.org/thread.html/ree7b06e6eac854fd42ba4f302079661a172f514a92aca2ef2f1aa7bb%40%3Cdev.apisix.apache.org%3E) to dev@apisix.apache.org & announce@apache.org
 
 ### Release minor version
 
 1. Create a minor branch, and create [pull request](https://github.com/apache/apisix/commit/bc6ddf51f15e41fffea6c5bd7d01da9838142b66) to master branch from it
-2. Package a vote artifact to Apache's dev-apisix repo. The artifact can be created
-via `VERSION=x.y.z make release-src`
+2. Package a vote artifact to Apache's dev-apisix repo. The artifact can be created via `VERSION=x.y.z make release-src`
 3. Send the [vote email](https://lists.apache.org/thread/q8zq276o20r5r9qjkg074nfzb77xwry9) to dev@apisix.apache.org
    > After executing the `VERSION=x.y.z make release-src` command, the content of the vote email will be automatically generated in the `./release` directory named `apache-apisix-${x.y.z}-vote-contents`
 4. When the vote is passed, send the [vote result email](https://lists.apache.org/thread/p1m9s116rojlhb91g38cj8646393qkz7) to dev@apisix.apache.org
@@ -57,6 +56,7 @@ via `VERSION=x.y.z make release-src`
 9. Update [APISIX's website](https://github.com/apache/apisix-website/commit/7bf0ab5a1bbd795e6571c4bb89a6e646115e7ca3)
 10. Update APISIX rpm package.
     > Go to [apisix-build-tools](https://github.com/api7/apisix-build-tools) repository and create a new tag named `apisix-${x.y.z}` to automatically submit the rpm package to yum repo
-11. First, Update [APISIX docker](https://github.com/apache/apisix-docker/commit/829d45559c303bea7edde5bebe9fcf4938071601) in [APISIX docker repository](https://github.com/apache/apisix-docker), after PR merged, then create a new branch from master, named as `release/apisix-${version}`, e.g. `release/apisix-2.10.2`
+11. - If the version number is the largest, update [APISIX docker](https://github.com/apache/apisix-docker/commit/829d45559c303bea7edde5bebe9fcf4938071601) in the [APISIX docker repository](https://github.com/apache/apisix-docker); after the PR is merged, create a new branch from master named `release/apisix-${version}`, e.g. `release/apisix-2.10.2`.
+    - If an LTS version is released and its version number is less than the current largest (e.g. the current largest version number is 2.14.1, but LTS version 2.13.2 is to be released), submit a PR like [APISIX docker](https://github.com/apache/apisix-docker/pull/322) in the [APISIX docker repository](https://github.com/apache/apisix-docker) with a branch named `release/apisix-${version}`, e.g. `release/apisix-2.13.2`. After the PR is reviewed, there is no need to merge it; just close the PR and push the branch to the APISIX docker repository.
 12. Update [APISIX helm chart](https://github.com/apache/apisix-helm-chart/pull/234)
 13. Send the [ANNOUNCE email](https://lists.apache.org/thread/4s4msqwl1tq13p9dnv3hx7skbgpkozw1) to dev@apisix.apache.org & announce@apache.org
index 6c82a6a94341b30054a49858b98abe11d3730fe9..5a575d340c6c593a33e4c8e050cdc224ba21e62c 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -243,7 +243,7 @@ clean:
 .PHONY: reload
 reload: runtime
        @$(call func_echo_status, "$@ -> [ Start ]")
-       $(ENV_NGINX) -s reload
+       $(ENV_APISIX) reload
        @$(call func_echo_success_status, "$@ -> [ Done ]")
 
 
index e28bef917c13be5381a001bb46c4936c1ceb1246..06fe248db280459ffaa676e9b3638b43b983b313 100644 (file)
--- a/README.md
+++ b/README.md
@@ -75,7 +75,7 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
 - **Full Dynamic**
 
   - [Hot Updates And Hot Plugins](docs/en/latest/terminology/plugin.md): Continuously updates its configurations and plugins without restarts!
-  - [Proxy Rewrite](docs/en/latest/plugins/proxy-rewrite.md): Support rewrite the `host`, `uri`, `schema`, `enable_websocket`, `headers` of the request before send to upstream.
+  - [Proxy Rewrite](docs/en/latest/plugins/proxy-rewrite.md): Supports rewriting the `host`, `uri`, `schema`, `method`, and `headers` of the request before sending it to the upstream.
   - [Response Rewrite](docs/en/latest/plugins/response-rewrite.md): Set customized response status code, body and header to the client.
   - Dynamic Load Balancing: Round-robin load balancing with weight.
   - Hash-based Load Balancing: Load balance with consistent hashing sessions.
@@ -199,7 +199,7 @@ Using AWS's eight-core server, APISIX's QPS reaches 140,000 with a latency of on
 
 - [European eFactory Platform: API Security Gateway – Using APISIX in the eFactory Platform](https://www.efactory-project.eu/post/api-security-gateway-using-apisix-in-the-efactory-platform)
 - [Copernicus Reference System Software](https://github.com/COPRS/infrastructure/wiki/Networking-trade-off)
-- [More Stories](https://apisix.apache.org/blog/tags/user-case)
+- [More Stories](https://apisix.apache.org/blog/tags/case-studies/)
 
 ## Who Uses APISIX API Gateway?
 
diff --git a/Vision-and-Milestones.md b/Vision-and-Milestones.md
new file mode 100644 (file)
index 0000000..333d991
--- /dev/null
@@ -0,0 +1,40 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+### Vision
+
+Apache APISIX is an open source API gateway designed to help developers connect any APIs securely and efficiently in any environment.
+
+Managing thousands or tens of thousands of APIs and microservices in a multi-cloud and hybrid cloud environment is not an easy task.
+There will be many challenges, such as authentication, observability, and security.
+
+Apache APISIX, a community-driven project, hopes to help everyone better manage and use APIs through the power of developers.
+Every developer's contribution will be used by thousands of companies and serve billions of users.
+
+### Milestones
+
+Apache APISIX has relatively complete features for north-south traffic,
+and will iterate around the following directions in the next 6 months (if you have any ideas, feel free to open an issue to discuss):
+
+- More complete support for Gateway API on APISIX ingress controller
+- Add support for service mesh
+- User-friendly documentation
+- More plugins for public cloud and SaaS services
+- Java/Go plugins and Wasm production-ready
+- Add dynamic debugging tools for Apache APISIX
index 46b23de09bdb95097d812c1da922fc7e7b2908d0..e9456a31b9d2a45f426fcc2442d7594734c137d9 100644 (file)
@@ -18,6 +18,7 @@ local core    = require("apisix.core")
 local plugins = require("apisix.admin.plugins")
 local utils   = require("apisix.admin.utils")
 local plugin  = require("apisix.plugin")
+local v3_adapter = require("apisix.admin.v3_adapter")
 local pairs   = pairs
 
 local _M = {
@@ -102,6 +103,7 @@ function _M.get(consumer_name)
     end
 
     utils.fix_count(res.body, consumer_name)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index c4dd4ca93380a4d6217230754add0bd73b33623b..5cec0604fb1eba95dcd36a55900cfcb1f65b29f9 100644 (file)
@@ -17,6 +17,7 @@
 local core = require("apisix.core")
 local utils = require("apisix.admin.utils")
 local schema_plugin = require("apisix.admin.plugins").check_schema
+local v3_adapter = require("apisix.admin.v3_adapter")
 local type = type
 local tostring = tostring
 
@@ -97,6 +98,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 318348ecd4ab16c7139020cebe50f9f2df61b8bc..fbbfd2d57fc4b3b1c4680d3b23740d6fbc8ff7b7 100644 (file)
@@ -18,6 +18,7 @@ local require = require
 local core = require("apisix.core")
 local route = require("apisix.utils.router")
 local plugin = require("apisix.plugin")
+local v3_adapter = require("apisix.admin.v3_adapter")
 local ngx = ngx
 local get_method = ngx.req.get_method
 local ngx_time = ngx.time
@@ -46,9 +47,9 @@ local resources = {
     upstreams       = require("apisix.admin.upstreams"),
     consumers       = require("apisix.admin.consumers"),
     schema          = require("apisix.admin.schema"),
-    ssl             = require("apisix.admin.ssl"),
+    ssls            = require("apisix.admin.ssl"),
     plugins         = require("apisix.admin.plugins"),
-    proto           = require("apisix.admin.proto"),
+    protos          = require("apisix.admin.proto"),
     global_rules    = require("apisix.admin.global_rules"),
     stream_routes   = require("apisix.admin.stream_routes"),
     plugin_metadata = require("apisix.admin.plugin_metadata"),
@@ -186,6 +187,11 @@ local function run()
     local code, data = resource[method](seg_id, req_body, seg_sub_path,
                                         uri_args)
     if code then
+        if v3_adapter.enable_v3() then
+            core.response.set_header("X-API-VERSION", "v3")
+        else
+            core.response.set_header("X-API-VERSION", "v2")
+        end
         data = strip_etcd_resp(data)
         core.response.exit(code, data)
     end
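The hunk above tags every Admin API response with an `X-API-VERSION` header and, when v3 is enabled, runs the body through `v3_adapter.filter`. The real adapter is Lua and its mapping is not shown in this commit, but as a hedged sketch (hypothetical field names), adapting an etcd-style v2 body into a flatter v3-style one could look like this:

```python
def filter_v3(body: dict) -> dict:
    """Sketch: flatten an etcd-style v2 response body into a v3-style one.

    Field names here are assumptions for illustration; the actual mapping
    lives in apisix/admin/v3_adapter.lua.
    """
    node = body.pop("node", None)
    if node is None:
        return body
    if "nodes" in node:
        # Collection response: expose the values as a flat list.
        body["list"] = [n.get("value") for n in node["nodes"]]
        body["total"] = len(body["list"])
    else:
        # Single-resource response: lift key/value to the top level.
        body["key"] = node.get("key")
        body["value"] = node.get("value")
    body.pop("action", None)  # drop etcd-specific fields
    return body
```

A client could then distinguish the two formats by inspecting the `X-API-VERSION` response header rather than sniffing the body shape.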
index bcf199fcd27c085e5e80522f3ae0e3c25f393a5c..06c100cdf2277781e0e746cfbb6513f5f505563e 100644 (file)
@@ -18,6 +18,7 @@ local core = require("apisix.core")
 local get_routes = require("apisix.router").http_routes
 local utils = require("apisix.admin.utils")
 local schema_plugin = require("apisix.admin.plugins").check_schema
+local v3_adapter = require("apisix.admin.v3_adapter")
 local type = type
 local tostring = tostring
 local ipairs = ipairs
@@ -97,6 +98,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 132db68a14060745d52fee2124ac65928af6106e..e1007956575fa5f1db3d56ca6d04f96ad534e380 100644 (file)
@@ -21,6 +21,7 @@ local utils = require("apisix.admin.utils")
 local get_routes = require("apisix.router").http_routes
 local get_services = require("apisix.http.service").services
 local compile_proto = require("apisix.plugins.grpc-transcode.proto").compile_proto
+local v3_adapter = require("apisix.admin.v3_adapter")
 local tostring = tostring
 
 
@@ -69,7 +70,7 @@ function _M.put(id, conf)
         return 400, err
     end
 
-    local key = "/proto/" .. id
+    local key = "/protos/" .. id
 
     local ok, err = utils.inject_conf_with_prev_conf("proto", key, conf)
     if not ok then
@@ -87,7 +88,7 @@ end
 
 
 function _M.get(id)
-    local key = "/proto"
+    local key = "/protos"
     if id then
         key = key .. "/" .. id
     end
@@ -99,6 +100,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
@@ -109,7 +111,7 @@ function _M.post(id, conf)
         return 400, err
     end
 
-    local key = "/proto"
+    local key = "/protos"
     utils.inject_timestamp(conf)
     local res, err = core.etcd.push(key, conf)
     if not res then
@@ -181,7 +183,7 @@ function _M.delete(id)
     end
     core.log.info("proto delete service ref check pass: ", id)
 
-    local key = "/proto/" .. id
+    local key = "/protos/" .. id
     -- core.log.info("key: ", key)
     local res, err = core.etcd.delete(key)
     if not res then
index 877f6cf5e2c1c09365f862797306646f20d1be63..ccfe0bf9517e9aea24f45d5f752ba35a8d0338f7 100644 (file)
@@ -19,6 +19,7 @@ local core = require("apisix.core")
 local apisix_upstream = require("apisix.upstream")
 local schema_plugin = require("apisix.admin.plugins").check_schema
 local utils = require("apisix.admin.utils")
+local v3_adapter = require("apisix.admin.v3_adapter")
 local tostring = tostring
 local type = type
 local loadstring = loadstring
@@ -203,6 +204,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 59c53eec3c6faca0a37633041ddc23da72b786f4..1872d5bb7ec193f8bbe816f436840dab431f5bd4 100644 (file)
@@ -19,6 +19,7 @@ local get_routes = require("apisix.router").http_routes
 local apisix_upstream = require("apisix.upstream")
 local schema_plugin = require("apisix.admin.plugins").check_schema
 local utils = require("apisix.admin.utils")
+local v3_adapter = require("apisix.admin.v3_adapter")
 local tostring = tostring
 local ipairs = ipairs
 local type = type
@@ -146,6 +147,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 9a73107c9f10de860111d4d342c8b14786a58ec6..54d74e9617e3464b9dd8b5c202f5d7ac7275060e 100644 (file)
@@ -17,6 +17,7 @@
 local core              = require("apisix.core")
 local utils             = require("apisix.admin.utils")
 local apisix_ssl        = require("apisix.ssl")
+local v3_adapter        = require("apisix.admin.v3_adapter")
 local tostring          = tostring
 local type              = type
 
@@ -72,7 +73,7 @@ function _M.put(id, conf)
         end
     end
 
-    local key = "/ssl/" .. id
+    local key = "/ssls/" .. id
 
     local ok, err = utils.inject_conf_with_prev_conf("ssl", key, conf)
     if not ok then
@@ -90,7 +91,7 @@ end
 
 
 function _M.get(id)
-    local key = "/ssl"
+    local key = "/ssls"
     if id then
         key = key .. "/" .. id
     end
@@ -107,6 +108,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
@@ -126,7 +128,7 @@ function _M.post(id, conf)
         end
     end
 
-    local key = "/ssl"
+    local key = "/ssls"
     utils.inject_timestamp(conf)
     local res, err = core.etcd.push(key, conf)
     if not res then
@@ -143,7 +145,7 @@ function _M.delete(id)
         return 400, {error_msg = "missing ssl id"}
     end
 
-    local key = "/ssl/" .. id
+    local key = "/ssls/" .. id
     -- core.log.info("key: ", key)
     local res, err = core.etcd.delete(key)
     if not res then
@@ -168,7 +170,7 @@ function _M.patch(id, conf, sub_path)
         return 400, {error_msg = "invalid configuration"}
     end
 
-    local key = "/ssl"
+    local key = "/ssls"
     if id then
         key = key .. "/" .. id
     end
index 6770830acf1fdaa92d07e90d21c41c1f2e7059d8..625911c04a0349238a7c7c6a90faab631159a183 100644 (file)
@@ -17,6 +17,7 @@
 local core = require("apisix.core")
 local utils = require("apisix.admin.utils")
 local stream_route_checker = require("apisix.stream.router.ip_port").stream_route_checker
+local v3_adapter = require("apisix.admin.v3_adapter")
 local tostring = tostring
 
 
@@ -114,6 +115,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 5aec652691f3b74d84779242a01a35db5e9eac69..d262f3977439e59f6ca0b682b9ab35ee0c1d41c5 100644 (file)
@@ -19,6 +19,7 @@ local get_routes = require("apisix.router").http_routes
 local get_services = require("apisix.http.service").services
 local apisix_upstream = require("apisix.upstream")
 local utils = require("apisix.admin.utils")
+local v3_adapter = require("apisix.admin.v3_adapter")
 local tostring = tostring
 local ipairs = ipairs
 local type = type
@@ -99,6 +100,7 @@ function _M.get(id)
     end
 
     utils.fix_count(res.body, id)
+    v3_adapter.filter(res.body)
     return res.status, res.body
 end
 
index 3ff695a473b6beb78773877f715ae377eaed44a0..db73dda6751f5b9130aefc46eda6944fafba7680 100644 (file)
@@ -24,8 +24,8 @@ local _M = {}
 
 local function inject_timestamp(conf, prev_conf, patch_conf)
     if not conf.create_time then
-        if prev_conf and prev_conf.node.value.create_time then
-            conf.create_time = prev_conf.node.value.create_time
+        if prev_conf and (prev_conf.node or prev_conf.list).value.create_time then
+            conf.create_time = (prev_conf.node or prev_conf.list).value.create_time
         else
             -- As we don't know existent data's create_time, we have to pretend
             -- they are created now.
diff --git a/apisix/admin/v3_adapter.lua b/apisix/admin/v3_adapter.lua
new file mode 100644 (file)
index 0000000..aa9226a
--- /dev/null
@@ -0,0 +1,189 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements.  See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License.  You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local fetch_local_conf  = require("apisix.core.config_local").local_conf
+local try_read_attr     = require("apisix.core.table").try_read_attr
+local log               = require("apisix.core.log")
+local request           = require("apisix.core.request")
+local response          = require("apisix.core.response")
+local table             = require("apisix.core.table")
+local tonumber          = tonumber
+local re_find           = ngx.re.find
+local pairs             = pairs
+
+local _M = {}
+
+
+local admin_api_version
+local function enable_v3()
+    if admin_api_version then
+        if admin_api_version == "v3" then
+            return true
+        end
+
+        if admin_api_version == "default" then
+            return false
+        end
+    end
+
+    local local_conf, err = fetch_local_conf()
+    if not local_conf then
+        admin_api_version = "default"
+        log.error("failed to fetch local conf: ", err)
+        return false
+    end
+
+    local api_ver = try_read_attr(local_conf, "apisix", "admin_api_version")
+    if api_ver ~= "v3" then
+        admin_api_version = "default"
+        return false
+    end
+
+    admin_api_version = api_ver
+    return true
+end
+_M.enable_v3 = enable_v3
+
+
+function _M.to_v3(body, action)
+    if not enable_v3() then
+        body.action = action
+    end
+end
+
+
+function _M.to_v3_list(body)
+    if not enable_v3() then
+        return
+    end
+
+    if body.node.dir then
+        body.list = body.node.nodes
+        body.node = nil
+    end
+end
+
+
+local function sort(l, r)
+    return l.createdIndex < r.createdIndex
+end
+
+
+local function pagination(body, args)
+    args.page = tonumber(args.page)
+    args.page_size = tonumber(args.page_size)
+    if not args.page or not args.page_size then
+        return
+    end
+
+    if args.page_size < 10 or args.page_size > 500 then
+        return response.exit(400, "page_size must be between 10 and 500")
+    end
+
+    if not args.page or args.page < 1 then
+        -- default page is 1
+        args.page = 1
+    end
+
+    local list = body.list
+
+    -- sort nodes by their createdIndex
+    table.sort(list, sort)
+
+    local to = args.page * args.page_size
+    local from = to - args.page_size + 1
+
+    local res = table.new(20, 0)
+
+    for i = from, to do
+        if list[i] then
+            res[i - from + 1] = list[i]
+        end
+    end
+
+    body.list = res
+end
+
+
+local function filter(body, args)
+    if not args.name and not args.label and not args.uri then
+        return
+    end
+
+    for i = #body.list, 1, -1 do
+        local name_matched = true
+        local label_matched = true
+        local uri_matched = true
+        if args.name then
+            name_matched = false
+            local matched = re_find(body.list[i].value.name, args.name, "jo")
+            if matched then
+                name_matched = true
+            end
+        end
+
+        if args.label then
+            label_matched = false
+            if body.list[i].value.labels then
+                for k, _ in pairs(body.list[i].value.labels) do
+                    if k == args.label then
+                        label_matched = true
+                        break
+                    end
+                end
+            end
+        end
+
+        if args.uri then
+            uri_matched = false
+            if body.list[i].value.uri then
+                local matched = re_find(body.list[i].value.uri, args.uri, "jo")
+                if matched then
+                    uri_matched = true
+                end
+            end
+
+            if body.list[i].value.uris then
+                for _, uri in pairs(body.list[i].value.uris) do
+                    if re_find(uri, args.uri, "jo") then
+                        uri_matched = true
+                        break
+                    end
+                end
+            end
+        end
+
+        if not name_matched or not label_matched or not uri_matched then
+            table.remove(body.list, i)
+        end
+    end
+end
+
+
+function _M.filter(body)
+    if not enable_v3() then
+        return
+    end
+
+    local args = request.get_uri_args()
+
+    pagination(body, args)
+    filter(body, args)
+end
+
+
+return _M
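Reviewer note: the pagination half of the new `v3_adapter.lua` (sort by `createdIndex`, 1-based pages, `page_size` clamped to 10..500) can be summarized as the Python sketch below. This is an illustrative re-expression, not APISIX code; the function name `paginate` is invented for the example.

```python
def paginate(items, page, page_size):
    """Mimic the v3 Admin API list pagination: 1-based pages, size 10..500."""
    if page is None or page_size is None:
        return items  # no pagination requested, return the list untouched
    if not (10 <= page_size <= 500):
        raise ValueError("page_size must be between 10 and 500")
    page = max(page, 1)  # default/minimum page is 1
    # sort nodes by their createdIndex, as the Lua comparator does
    items = sorted(items, key=lambda n: n["createdIndex"])
    start = (page - 1) * page_size
    return items[start:start + page_size]
```

Out-of-range pages simply yield an empty list, matching the Lua loop that copies only existing indexes.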
index 4dd387400533b9038b3d055ac39e7d84fa7c4305..462d04f07ad24bfded9162cbf833d7b1cdd8ec19 100644 (file)
@@ -26,6 +26,7 @@ local set_more_tries   = balancer.set_more_tries
 local get_last_failure = balancer.get_last_failure
 local set_timeouts     = balancer.set_timeouts
 local ngx_now          = ngx.now
+local str_byte         = string.byte
 
 
 local module_name = "balancer"
@@ -195,6 +196,12 @@ local function pick_server(route, ctx)
     core.log.info("ctx: ", core.json.delay_encode(ctx, true))
     local up_conf = ctx.upstream_conf
 
+    for _, node in ipairs(up_conf.nodes) do
+        if core.utils.parse_ipv6(node.host) and str_byte(node.host, 1) ~= str_byte("[") then
+            node.host = '[' .. node.host .. ']'
+        end
+    end
+
     local nodes_count = #up_conf.nodes
     if nodes_count == 1 then
         local node = up_conf.nodes[1]
@@ -302,6 +309,7 @@ do
             local size = keepalive_pool.size
             local requests = keepalive_pool.requests
 
+            core.table.clear(pool_opt)
             pool_opt.pool_size = size
 
             local scheme = up_conf.scheme
@@ -358,7 +366,7 @@ function _M.run(route, ctx, plugin_funcs)
 
         local header_changed
         local pass_host = ctx.pass_host
-        if pass_host == "node" and balancer.recreate_request then
+        if pass_host == "node" then
             local host = server.upstream_host
             if host ~= ctx.var.upstream_host then
                 -- retried node has a different host
@@ -369,7 +377,7 @@ function _M.run(route, ctx, plugin_funcs)
 
         local _, run = plugin_funcs("before_proxy")
         -- always recreate request as the request may be changed by plugins
-        if (run or header_changed) and balancer.recreate_request then
+        if run or header_changed then
             balancer.recreate_request()
         end
     end
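Reviewer note: the `pick_server` change above wraps bare IPv6 node hosts in brackets before they are used as upstream addresses. A minimal sketch of the intent, in Python for illustration: the real code uses `core.utils.parse_ipv6` for detection, while the `":"` check here is a deliberate simplification.

```python
def normalize_host(host):
    """Wrap a bare IPv6 literal in brackets so 'addr:port' stays unambiguous."""
    # crude IPv6 detection for illustration only; APISIX does a real parse
    if ":" in host and not host.startswith("["):
        return "[" + host + "]"
    return host
```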
index d284e20848ddca670ad4ca7d32fd31f69822bd69..079691f51a04804eff5369b51ff6aa74e466489f 100755 (executable)
 local pkg_cpath_org = package.cpath
 local pkg_path_org = package.path
 
+local _, find_pos_end = string.find(pkg_path_org, ";", -1, true)
+if not find_pos_end then
+    pkg_path_org = pkg_path_org .. ";"
+end
+
 local apisix_home = "/usr/local/apisix"
 local pkg_cpath = apisix_home .. "/deps/lib64/lua/5.1/?.so;"
                   .. apisix_home .. "/deps/lib/lua/5.1/?.so;"
-local pkg_path = apisix_home .. "/deps/share/lua/5.1/?.lua;"
+local pkg_path_deps = apisix_home .. "/deps/share/lua/5.1/?.lua;"
+local pkg_path_env = apisix_home .. "/?.lua;"
 
 -- modify the load path to load our dependencies
 package.cpath = pkg_cpath .. pkg_cpath_org
-package.path  = pkg_path .. pkg_path_org
+package.path  = pkg_path_deps .. pkg_path_org .. pkg_path_env
 
 -- pass path to construct the final result
 local env = require("apisix.cli.env")(apisix_home, pkg_cpath_org, pkg_path_org)
index 3c78ab3c11d21369667db97d905ae91cc4c2e1ca..f0e1a36e7e8891542d8c5efd8882fc7bae91cda2 100644 (file)
@@ -82,7 +82,7 @@ return function (apisix_home, pkg_cpath_org, pkg_path_org)
     -- pre-transform openresty path
     res, err = util.execute_cmd("command -v openresty")
     if not res then
-        error("failed to exec ulimit cmd \'command -v openresty\', err: " .. err)
+        error("failed to exec cmd \'command -v openresty\', err: " .. err)
     end
     local openresty_path_abs = util.trim(res)
 
index 9c528005e1fd87faa0545ceb0f345c79c6ca3e55..85207233bfdf9f0522f724a66f0f33c39eb33c0b 100644 (file)
@@ -251,11 +251,33 @@ function _M.read_yaml_conf(apisix_home)
         end
     end
 
-    if default_conf.deployment
-        and default_conf.deployment.role == "traditional"
-        and default_conf.deployment.etcd
-    then
-        default_conf.etcd = default_conf.deployment.etcd
+    if default_conf.deployment then
+        if default_conf.deployment.role == "traditional" then
+            default_conf.etcd = default_conf.deployment.etcd
+
+        elseif default_conf.deployment.role == "control_plane" then
+            default_conf.etcd = default_conf.deployment.etcd
+            default_conf.apisix.enable_admin = true
+
+        elseif default_conf.deployment.role == "data_plane" then
+            if default_conf.deployment.role_data_plane.config_provider == "yaml" then
+                default_conf.apisix.config_center = "yaml"
+            else
+                default_conf.etcd = default_conf.deployment.role_data_plane.control_plane
+            end
+            default_conf.apisix.enable_admin = false
+        end
+
+        if default_conf.etcd and default_conf.deployment.certs then
+            -- copy certs configuration to keep backward compatible
+            local certs = default_conf.deployment.certs
+            local etcd = default_conf.etcd
+            if not etcd.tls then
+                etcd.tls = {}
+            end
+            etcd.tls.cert = certs.cert
+            etcd.tls.key = certs.cert_key
+        end
     end
 
     return default_conf
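Reviewer note: the new role handling in `read_yaml_conf` maps each deployment role onto the legacy `etcd`/`apisix` fields. The Python sketch below mirrors that mapping for illustration; the function name `resolve_deployment` and the dict shapes are assumptions made for the example.

```python
def resolve_deployment(conf):
    """Mirror the deployment-role handling added to read_yaml_conf (illustrative)."""
    dep = conf.get("deployment")
    if not dep:
        return conf
    role = dep.get("role")
    if role == "traditional":
        conf["etcd"] = dep["etcd"]
    elif role == "control_plane":
        conf["etcd"] = dep["etcd"]
        conf["apisix"]["enable_admin"] = True
    elif role == "data_plane":
        if dep["role_data_plane"]["config_provider"] == "yaml":
            conf["apisix"]["config_center"] = "yaml"
        else:
            conf["etcd"] = dep["role_data_plane"]["control_plane"]
        conf["apisix"]["enable_admin"] = False
    # copy certs configuration to keep backward compatibility
    certs = dep.get("certs")
    if conf.get("etcd") and certs:
        tls = conf["etcd"].setdefault("tls", {})
        tls["cert"] = certs["cert"]
        tls["key"] = certs["cert_key"]
    return conf
```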
index f22280766982b961073b1940f7ba31b7d5dead36..0f5d84b17fedfcb63f5b0b8f8d3802ea6131c046 100644 (file)
@@ -66,6 +66,12 @@ lua {
 
 {% if enabled_stream_plugins["prometheus"] and not enable_http then %}
 http {
+    lua_package_path  "{*extra_lua_path*}$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;]=]
+                       .. [=[{*apisix_lua_home*}/?.lua;{*apisix_lua_home*}/?/init.lua;;{*lua_path*};";
+    lua_package_cpath "{*extra_lua_cpath*}$prefix/deps/lib64/lua/5.1/?.so;]=]
+                      .. [=[$prefix/deps/lib/lua/5.1/?.so;;]=]
+                      .. [=[{*lua_cpath*};";
+
     {% if enabled_stream_plugins["prometheus"] then %}
     init_worker_by_lua_block {
         require("apisix.plugins.prometheus.exporter").http_init(true)
@@ -302,28 +308,6 @@ http {
     {% end %}
     {% end %}
 
-    {% if enabled_plugins["proxy-cache"] then %}
-    # for proxy cache
-    {% for _, cache in ipairs(proxy_cache.zones) do %}
-    {% if cache.disk_path and cache.cache_levels and cache.disk_size then %}
-    proxy_cache_path {* cache.disk_path *} levels={* cache.cache_levels *} keys_zone={* cache.name *}:{* cache.memory_size *} inactive=1d max_size={* cache.disk_size *} use_temp_path=off;
-    {% else %}
-    lua_shared_dict {* cache.name *} {* cache.memory_size *};
-    {% end %}
-    {% end %}
-    {% end %}
-
-    {% if enabled_plugins["proxy-cache"] then %}
-    # for proxy cache
-    map $upstream_cache_zone $upstream_cache_zone_info {
-    {% for _, cache in ipairs(proxy_cache.zones) do %}
-    {% if cache.disk_path and cache.cache_levels and cache.disk_size then %}
-        {* cache.name *} {* cache.disk_path *},{* cache.cache_levels *};
-    {% end %}
-    {% end %}
-    }
-    {% end %}
-
     {% if enabled_plugins["error-log-logger"] then %}
         lua_capture_error_log  10m;
     {% end %}
@@ -467,11 +451,9 @@ http {
         apisix.http_init_worker()
     }
 
-    {% if not use_openresty_1_17 then %}
     exit_worker_by_lua_block {
         apisix.http_exit_worker()
     }
-    {% end %}
 
     {% if enable_control then %}
     server {
@@ -580,6 +562,27 @@ http {
     {* conf_server *}
     {% end %}
 
+    {% if deployment_role ~= "control_plane" then %}
+
+    {% if enabled_plugins["proxy-cache"] then %}
+    # for proxy cache
+    {% for _, cache in ipairs(proxy_cache.zones) do %}
+    {% if cache.disk_path and cache.cache_levels and cache.disk_size then %}
+    proxy_cache_path {* cache.disk_path *} levels={* cache.cache_levels *} keys_zone={* cache.name *}:{* cache.memory_size *} inactive=1d max_size={* cache.disk_size *} use_temp_path=off;
+    {% else %}
+    lua_shared_dict {* cache.name *} {* cache.memory_size *};
+    {% end %}
+    {% end %}
+
+    map $upstream_cache_zone $upstream_cache_zone_info {
+    {% for _, cache in ipairs(proxy_cache.zones) do %}
+    {% if cache.disk_path and cache.cache_levels and cache.disk_size then %}
+        {* cache.name *} {* cache.disk_path *},{* cache.cache_levels *};
+    {% end %}
+    {% end %}
+    }
+    {% end %}
+
     server {
         {% for _, item in ipairs(node_listen) do %}
         listen {* item.ip *}:{* item.port *} default_server {% if item.enable_http2 then %} http2 {% end %} {% if enable_reuseport then %} reuseport {% end %};
@@ -852,6 +855,8 @@ http {
             }
         }
     }
+    {% end %}
+
     # http end configuration snippet starts
     {% if http_end_configuration_snippet then %}
     {* http_end_configuration_snippet *}
index d2275bed5813ab6274299fe418e9465c0787f510..2cd72837adb3f96d2e6a7d36a937488c704ce6f0 100644 (file)
@@ -235,16 +235,11 @@ Please modify "admin_key" in conf/config.yaml .
         util.die("can not find openresty\n")
     end
 
-    local need_ver = "1.17.8"
+    local need_ver = "1.19.3"
     if not version_greater_equal(or_ver, need_ver) then
         util.die("openresty version must >=", need_ver, " current ", or_ver, "\n")
     end
 
-    local use_openresty_1_17 = false
-    if not version_greater_equal(or_ver, "1.19.3") then
-        use_openresty_1_17 = true
-    end
-
     local or_info = util.execute_cmd("openresty -V 2>&1")
     if or_info and not or_info:find("http_stub_status_module", 1, true) then
         util.die("'http_stub_status_module' module is missing in ",
@@ -546,16 +541,22 @@ Please modify "admin_key" in conf/config.yaml .
     end
 
     if yaml_conf.deployment and yaml_conf.deployment.role then
-        env.deployment_role = yaml_conf.deployment.role
+        local role = yaml_conf.deployment.role
+        env.deployment_role = role
+
+        if role == "control_plane" and not admin_server_addr then
+            local listen = node_listen[1]
+            admin_server_addr = str_format("%s:%s", listen.ip, listen.port)
+        end
     end
 
     -- Using template.render
     local sys_conf = {
-        use_openresty_1_17 = use_openresty_1_17,
         lua_path = env.pkg_path_org,
         lua_cpath = env.pkg_cpath_org,
         os_name = util.trim(util.execute_cmd("uname")),
         apisix_lua_home = env.apisix_home,
+        deployment_role = env.deployment_role,
         use_apisix_openresty = use_apisix_openresty,
         error_log = {level = "warn"},
         enable_http = enable_http,
index db4f47477de52a1dfcdf17fb88c25be2cb8deaef..fa76326a943641391a079cdc75a43ca1e7ec9833 100644 (file)
@@ -56,6 +56,12 @@ local etcd_schema = {
                 pattern = [[^https?://]]
             },
             minItems = 1,
+        },
+        timeout = {
+            type = "integer",
+            default = 30,
+            minimum = 1,
+            description = "etcd connection timeout in seconds",
         }
     },
     required = {"prefix", "host"}
@@ -272,9 +278,82 @@ local deployment_schema = {
     traditional = {
         properties = {
             etcd = etcd_schema,
+            role_traditional = {
+                properties = {
+                    config_provider = {
+                        enum = {"etcd"}
+                    },
+                },
+                required = {"config_provider"}
+            }
         },
         required = {"etcd"}
     },
+    control_plane = {
+        properties = {
+            etcd = etcd_schema,
+            role_control_plane = {
+                properties = {
+                    config_provider = {
+                        enum = {"etcd"}
+                    },
+                    conf_server = {
+                        properties = {
+                            listen = {
+                                type = "string",
+                                default = "0.0.0.0:9280",
+                            },
+                            cert = { type = "string" },
+                            cert_key = { type = "string" },
+                            client_ca_cert = { type = "string" },
+                        },
+                        required = {"cert", "cert_key"}
+                    },
+                },
+                required = {"config_provider", "conf_server"}
+            },
+            certs = {
+                properties = {
+                    cert = { type = "string" },
+                    cert_key = { type = "string" },
+                    trusted_ca_cert = { type = "string" },
+                },
+                dependencies = {
+                    cert = {
+                        required = {"cert_key"},
+                    },
+                },
+                default = {},
+            },
+        },
+        required = {"etcd", "role_control_plane"}
+    },
+    data_plane = {
+        properties = {
+            role_data_plane = {
+                properties = {
+                    config_provider = {
+                        enum = {"control_plane", "yaml"}
+                    },
+                },
+                required = {"config_provider"}
+            },
+            certs = {
+                properties = {
+                    cert = { type = "string" },
+                    cert_key = { type = "string" },
+                    trusted_ca_cert = { type = "string" },
+                },
+                dependencies = {
+                    cert = {
+                        required = {"cert_key"},
+                    },
+                },
+                default = {},
+            },
+        },
+        required = {"role_data_plane"}
+    }
 }
 
 
index bfaf973a026ca5c736ee5bd52001f550d74ca439..6c2414c34311dedd07274c79ba72544086bdc703 100644 (file)
@@ -24,7 +24,10 @@ local _M = {}
 
 
 function _M.generate_conf_server(env, conf)
-    if not (conf.deployment and conf.deployment.role == "traditional") then
+    if not (conf.deployment and (
+        conf.deployment.role == "traditional" or
+        conf.deployment.role == "control_plane"))
+    then
         return nil, nil
     end
 
@@ -49,6 +52,24 @@ function _M.generate_conf_server(env, conf)
         end
     end
 
+    local control_plane
+    if conf.deployment.role == "control_plane" then
+        control_plane = conf.deployment.role_control_plane.conf_server
+        control_plane.cert = pl_path.abspath(control_plane.cert)
+        control_plane.cert_key = pl_path.abspath(control_plane.cert_key)
+
+        if control_plane.client_ca_cert then
+            control_plane.client_ca_cert = pl_path.abspath(control_plane.client_ca_cert)
+        end
+    end
+
+    local trusted_ca_cert
+    if conf.deployment.certs then
+        if conf.deployment.certs.trusted_ca_cert then
+            trusted_ca_cert = pl_path.abspath(conf.deployment.certs.trusted_ca_cert)
+        end
+    end
+
     local conf_render = template.compile([[
     upstream apisix_conf_backend {
         server 0.0.0.0:80;
@@ -57,8 +78,26 @@ function _M.generate_conf_server(env, conf)
             conf_server.balancer()
         }
     }
+
+    {% if trusted_ca_cert then %}
+    lua_ssl_trusted_certificate {* trusted_ca_cert *};
+    {% end %}
+
     server {
+        {% if control_plane then %}
+        listen {* control_plane.listen *} ssl;
+        ssl_certificate {* control_plane.cert *};
+        ssl_certificate_key {* control_plane.cert_key *};
+
+        {% if control_plane.client_ca_cert then %}
+        ssl_verify_client on;
+        ssl_client_certificate {* control_plane.client_ca_cert *};
+        {% end %}
+
+        {% else %}
         listen unix:{* home *}/conf/config_listen.sock;
+        {% end %}
+
         access_log off;
 
         set $upstream_host '';
@@ -71,17 +110,20 @@ function _M.generate_conf_server(env, conf)
         location / {
             {% if enable_https then %}
             proxy_pass https://apisix_conf_backend;
+            proxy_ssl_protocols TLSv1.2 TLSv1.3;
             proxy_ssl_server_name on;
+
             {% if sni then %}
             proxy_ssl_name {* sni *};
             {% else %}
             proxy_ssl_name $upstream_host;
             {% end %}
-            proxy_ssl_protocols TLSv1.2 TLSv1.3;
+
             {% if client_cert then %}
             proxy_ssl_certificate {* client_cert *};
             proxy_ssl_certificate_key {* client_cert_key *};
             {% end %}
+
             {% else %}
             proxy_pass http://apisix_conf_backend;
             {% end %}
@@ -89,6 +131,7 @@ function _M.generate_conf_server(env, conf)
             proxy_http_version 1.1;
             proxy_set_header Connection "";
             proxy_set_header Host $upstream_host;
+            proxy_next_upstream error timeout non_idempotent http_500 http_502 http_503 http_504;
         }
 
         log_by_lua_block {
@@ -107,11 +150,13 @@ function _M.generate_conf_server(env, conf)
     end
 
     return conf_render({
-        sni = etcd.tls and etcd.tls.sni,
-        enable_https = enable_https,
+        sni = tls and tls.sni,
         home = env.apisix_home or ".",
+        control_plane = control_plane,
+        enable_https = enable_https,
         client_cert = client_cert,
         client_cert_key = client_cert_key,
+        trusted_ca_cert = trusted_ca_cert,
     })
 end
 
index 40cf2895158b84e890978ad54f7aa80cfd740c13..e0ea91e770134457283781cb49eeef558c2e028f 100644 (file)
@@ -21,7 +21,9 @@ local balancer = require("ngx.balancer")
 local error = error
 local ipairs = ipairs
 local ngx = ngx
+local ngx_shared = ngx.shared
 local ngx_var = ngx.var
+local tonumber = tonumber
 
 
 local _M = {}
@@ -30,6 +32,16 @@ local resolved_results = {}
 local server_picker
 local has_domain = false
 
+local is_http = ngx.config.subsystem == "http"
+local health_check_shm_name = "etcd-cluster-health-check"
+if not is_http then
+    health_check_shm_name = health_check_shm_name .. "-stream"
+end
+-- an endpoint is unhealthy if it fails HEALTH_CHECK_MAX_FAILURE times within
+-- HEALTH_CHECK_DURATION_SECOND
+local HEALTH_CHECK_MAX_FAILURE = 3
+local HEALTH_CHECK_DURATION_SECOND = 10
+
 
 local function create_resolved_result(server)
     local host, port = core.utils.parse_addr(server)
@@ -48,6 +60,10 @@ function _M.init()
     end
 
     local etcd = conf.deployment.etcd
+    if etcd.health_check_timeout then
+        HEALTH_CHECK_DURATION_SECOND = etcd.health_check_timeout
+    end
+
     for i, s in ipairs(etcd.host) do
         local _, to = core.string.find(s, "://")
         if not to then
@@ -80,7 +96,13 @@ end
 
 
 local function response_err(err)
-    ngx.log(ngx.ERR, "failure in conf server: ", err)
+    core.log.error("failure in conf server: ", err)
+
+    if ngx.get_phase() == "balancer" then
+        return
+    end
+
+    ngx.status = 503
     ngx.say(core.json.encode({error = err}))
     ngx.exit(0)
 end
@@ -127,25 +149,87 @@ local function resolve_servers(ctx)
 end
 
 
+local function gen_unhealthy_key(addr)
+    return "conf_server:" .. addr
+end
+
+
+local function is_node_health(addr)
+    local key = gen_unhealthy_key(addr)
+    local count, err = ngx_shared[health_check_shm_name]:get(key)
+    if err then
+        core.log.warn("failed to get health check count, key: ", key, " err: ", err)
+        return true
+    end
+
+    if not count then
+        return true
+    end
+
+    return tonumber(count) < HEALTH_CHECK_MAX_FAILURE
+end
+
+
+local function report_failure(addr)
+    local key = gen_unhealthy_key(addr)
+    local count, err =
+        ngx_shared[health_check_shm_name]:incr(key, 1, 0, HEALTH_CHECK_DURATION_SECOND)
+    if not count then
+        core.log.error("failed to report failure, key: ", key, " err: ", err)
+    else
+        -- count might be larger than HEALTH_CHECK_MAX_FAILURE
+        core.log.warn("report failure, endpoint: ", addr, " count: ", count)
+    end
+end
+
+
+local function pick_node_by_server_picker(ctx)
+    local server, err = ctx.server_picker.get(ctx)
+    if not server then
+        err = err or "no valid upstream node"
+        return nil, "failed to find valid upstream server: " .. err
+    end
+
+    ctx.balancer_server = server
+
+    for _, r in ipairs(resolved_results) do
+        if r.server == server then
+            return r
+        end
+    end
+
+    return nil, "unknown server: " .. server
+end
+
+
 local function pick_node(ctx)
     local res
     if server_picker then
-        local server, err = server_picker.get(ctx)
-        if not server then
-            err = err or "no valid upstream node"
-            return nil, "failed to find valid upstream server, " .. err
+        if not ctx.server_picker then
+            ctx.server_picker = server_picker
         end
 
-        ctx.server_picker = server_picker
-        ctx.balancer_server = server
+        local err
+        res, err = pick_node_by_server_picker(ctx)
+        if not res then
+            return nil, err
+        end
+
+        while not is_node_health(res.server) do
+            core.log.warn("endpoint ", res.server, " is unhealthy, skipped")
+
+            if server_picker.after_balance then
+                server_picker.after_balance(ctx, true)
+            end
 
-        for _, r in ipairs(resolved_results) do
-            if r.server == server then
-                res = r
-                break
+            res, err = pick_node_by_server_picker(ctx)
+            if not res then
+                return nil, err
             end
         end
+
     else
+        -- we don't do health check if there is only one candidate
         res = resolved_results[1]
     end
 
@@ -153,7 +237,7 @@ local function pick_node(ctx)
     ctx.balancer_port = res.port
 
     ngx_var.upstream_host = res.domain or res.host
-    if balancer.recreate_request and ngx.get_phase() == "balancer" then
+    if ngx.get_phase() == "balancer" then
         balancer.recreate_request()
     end
 
@@ -185,6 +269,12 @@ function _M.balancer()
             core.log.warn("could not set upstream retries: ", err)
         end
     else
+        if ctx.server_picker and ctx.server_picker.after_balance then
+            ctx.server_picker.after_balance(ctx, true)
+        end
+
+        report_failure(ctx.balancer_server)
+
         local ok, err = pick_node(ctx)
         if not ok then
             return response_err(err)
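The conf server health check above keeps a per-endpoint failure counter in a shared dict: `report_failure` increments the counter with an expiry of `HEALTH_CHECK_DURATION_SECOND`, and `is_node_health` treats an endpoint as unhealthy once the counter reaches `HEALTH_CHECK_MAX_FAILURE`. A minimal Python sketch of the same bookkeeping (the threshold and window values are assumptions, not taken from this diff; a plain dict stands in for `ngx.shared`):

```python
import time

HEALTH_CHECK_MAX_FAILURE = 3        # assumed threshold
HEALTH_CHECK_DURATION_SECOND = 10   # assumed expiry window

_counters = {}  # key -> (count, expires_at), stands in for the shared dict


def gen_unhealthy_key(addr):
    return "conf_server:" + addr


def report_failure(addr, now=None):
    # increment the failure counter, starting a fresh window if it expired,
    # like incr(key, 1, 0, HEALTH_CHECK_DURATION_SECOND) on a shared dict
    now = time.time() if now is None else now
    key = gen_unhealthy_key(addr)
    count, expires_at = _counters.get(key, (0, 0))
    if now >= expires_at:
        count, expires_at = 0, now + HEALTH_CHECK_DURATION_SECOND
    _counters[key] = (count + 1, expires_at)
    return count + 1


def is_node_health(addr, now=None):
    # healthy until the counter (within its window) reaches the threshold
    now = time.time() if now is None else now
    count, expires_at = _counters.get(gen_unhealthy_key(addr), (0, 0))
    if now >= expires_at:
        return True
    return count < HEALTH_CHECK_MAX_FAILURE
```

Once the window expires the counter resets, so a previously failing endpoint is retried automatically.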
index cf04e890cc8cf6c1bf984f8a55a219f0055bd86b..1c82ec3d49cd4e44b1a3c094a059697532f6db12 100644 (file)
@@ -23,20 +23,20 @@ return {
     HTTP_ETCD_DIRECTORY = {
         ["/upstreams"] = true,
         ["/plugins"] = true,
-        ["/ssl"] = true,
+        ["/ssls"] = true,
         ["/stream_routes"] = true,
         ["/plugin_metadata"] = true,
         ["/routes"] = true,
         ["/services"] = true,
         ["/consumers"] = true,
         ["/global_rules"] = true,
-        ["/proto"] = true,
+        ["/protos"] = true,
         ["/plugin_configs"] = true,
     },
     STREAM_ETCD_DIRECTORY = {
         ["/upstreams"] = true,
         ["/plugins"] = true,
-        ["/ssl"] = true,
+        ["/ssls"] = true,
         ["/stream_routes"] = true,
         ["/plugin_metadata"] = true,
     },
index bbe457cd607f0214aaa4964ce5b89aeda59f8ebe..c6d1e065041f5a96e745c60c21c2baa009c055ab 100644 (file)
@@ -276,6 +276,28 @@ function _M.dump_service_info()
     return 200, info
 end
 
+function _M.dump_all_plugin_metadata()
+    local names = core.config.local_conf().plugins
+    local metadatas = core.table.new(0, #names)
+    for _, name in ipairs(names) do
+        local metadata = plugin.plugin_metadata(name)
+        if metadata then
+            core.table.insert(metadatas, metadata.value)
+        end
+    end
+    return 200, metadatas
+end
+
+function _M.dump_plugin_metadata()
+    local uri_segs = core.utils.split_uri(ngx_var.uri)
+    local name = uri_segs[4]
+    local metadata = plugin.plugin_metadata(name)
+    if not metadata then
+        return 404, {error_msg = str_format("plugin metadata[%s] not found", name)}
+    end
+    return 200, metadata.value
+end
+
 
 return {
     -- /v1/schema
@@ -337,5 +359,17 @@ return {
         methods = {"GET"},
         uris = {"/upstream/*"},
         handler = _M.dump_upstream_info,
+    },
+    -- /v1/plugin_metadatas
+    {
+        methods = {"GET"},
+        uris = {"/plugin_metadatas"},
+        handler = _M.dump_all_plugin_metadata,
+    },
+    -- /v1/plugin_metadata/*
+    {
+        methods = {"GET"},
+        uris = {"/plugin_metadata/*"},
+        handler = _M.dump_plugin_metadata,
     }
 }
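The new `/v1/plugin_metadata/*` control API handler splits the request URI and uses the fourth segment as the plugin name. A rough Python sketch of that lookup (the in-memory `METADATA` table is a hypothetical stand-in for `plugin.plugin_metadata`):

```python
# hypothetical stand-in for the etcd-backed plugin metadata store
METADATA = {"http-logger": {"log_format": {"host": "$host"}}}


def split_uri(uri):
    # "/v1/plugin_metadata/http-logger" -> ["", "v1", "plugin_metadata", "http-logger"]
    return uri.split("/")


def dump_plugin_metadata(uri):
    # uri_segs[4] in Lua's 1-based indexing is index 3 here
    name = split_uri(uri)[3]
    metadata = METADATA.get(name)
    if metadata is None:
        return 404, {"error_msg": "plugin metadata[%s] not found" % name}
    return 200, metadata
```

The companion `/v1/plugin_metadatas` route simply collects the same lookup over every configured plugin name.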
index 183c52aac33862ba5620d73fd3a47e2d182a21d7..edb05455c248c50408478e4787c576197c7221f7 100644 (file)
@@ -212,14 +212,17 @@ local function load_full_data(self, dir_res, headers)
         self:upgrade_version(item.modifiedIndex)
 
     else
-        if not dir_res.nodes then
-            dir_res.nodes = {}
+        -- here dir_res may be res.body.node or res.body.list
+        -- we need to make values equal to res.body.node.nodes or res.body.list
+        local values = (dir_res and dir_res.nodes) or dir_res
+        if not values then
+            values = {}
         end
 
-        self.values = new_tab(#dir_res.nodes, 0)
-        self.values_hash = new_tab(0, #dir_res.nodes)
+        self.values = new_tab(#values, 0)
+        self.values_hash = new_tab(0, #values)
 
-        for _, item in ipairs(dir_res.nodes) do
+        for _, item in ipairs(values) do
             local key = short_key(self, item.key)
             local data_valid = true
             if type(item.value) ~= "table" then
@@ -302,7 +305,7 @@ local function sync_data(self)
             return false, err
         end
 
-        local dir_res, headers = res.body.node or {}, res.headers
+        local dir_res, headers = res.body.list or {}, res.headers
         log.debug("readdir key: ", self.key, " res: ",
                   json.delay_encode(dir_res))
         if not dir_res then
@@ -812,18 +815,13 @@ function _M.init()
         return true
     end
 
-    local etcd_cli, err = get_etcd()
+    -- don't go through proxy during start because the proxy is not available
+    local etcd_cli, prefix, err = etcd_apisix.new_without_proxy()
     if not etcd_cli then
         return nil, "failed to start a etcd instance: " .. err
     end
 
-    -- don't go through proxy during start because the proxy is not available
-    local proxy = etcd_cli.unix_socket_proxy
-    etcd_cli.unix_socket_proxy = nil
-    local etcd_conf = local_conf.etcd
-    local prefix = etcd_conf.prefix
     local res, err = readdir(etcd_cli, prefix, create_formatter(prefix))
-    etcd_cli.unix_socket_proxy = proxy
     if not res then
         return nil, err
     end
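The `load_full_data` change normalizes two response shapes into one list: a v2-style node that carries its children under `.nodes`, and the new `res.body.list`, which is already a flat list. The `(dir_res and dir_res.nodes) or dir_res` expression can be sketched in Python as:

```python
def normalize_dir_res(dir_res):
    # dir_res may be a v2-style node ({"nodes": [...]}) or a v3-style plain list;
    # either way the caller wants a (possibly empty) list of items
    if isinstance(dir_res, dict):
        values = dir_res.get("nodes")
    else:
        values = dir_res
    return values or []
```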
index a57a5d0c86bf6a9e583d3dec9a0cf2eb01bd51d1..7ac08334e2d051f1283662a566335d6e97ee387f 100644 (file)
@@ -21,6 +21,7 @@
 
 local fetch_local_conf  = require("apisix.core.config_local").local_conf
 local array_mt          = require("apisix.core.json").array_mt
+local v3_adapter        = require("apisix.admin.v3_adapter")
 local etcd              = require("resty.etcd")
 local clone_tab         = require("table.clone")
 local health_check      = require("resty.etcd.health_check")
@@ -29,25 +30,63 @@ local setmetatable      = setmetatable
 local string            = string
 local tonumber          = tonumber
 local ngx_config_prefix = ngx.config.prefix()
+local ngx_socket_tcp    = ngx.socket.tcp
 
 
 local is_http = ngx.config.subsystem == "http"
 local _M = {}
 
 
--- this function create the etcd client instance used in the Admin API
+local function has_mtls_support()
+    local s = ngx_socket_tcp()
+    return s.tlshandshake ~= nil
+end
+
+
+local function _new(etcd_conf)
+    local prefix = etcd_conf.prefix
+    etcd_conf.http_host = etcd_conf.host
+    etcd_conf.host = nil
+    etcd_conf.prefix = nil
+    etcd_conf.protocol = "v3"
+    etcd_conf.api_prefix = "/v3"
+
+    -- default to verify etcd cluster certificate
+    etcd_conf.ssl_verify = true
+    if etcd_conf.tls then
+        if etcd_conf.tls.verify == false then
+            etcd_conf.ssl_verify = false
+        end
+
+        if etcd_conf.tls.cert then
+            etcd_conf.ssl_cert_path = etcd_conf.tls.cert
+            etcd_conf.ssl_key_path = etcd_conf.tls.key
+        end
+
+        if etcd_conf.tls.sni then
+            etcd_conf.sni = etcd_conf.tls.sni
+        end
+    end
+
+    local etcd_cli, err = etcd.new(etcd_conf)
+    if not etcd_cli then
+        return nil, nil, err
+    end
+
+    return etcd_cli, prefix
+end
+
+
 local function new()
     local local_conf, err = fetch_local_conf()
     if not local_conf then
         return nil, nil, err
     end
 
-    local etcd_conf
+    local etcd_conf = clone_tab(local_conf.etcd)
     local proxy_by_conf_server = false
 
     if local_conf.deployment then
-        etcd_conf = clone_tab(local_conf.deployment.etcd)
-
         if local_conf.deployment.role == "traditional"
             -- we proxy the etcd requests in traditional mode so we can test the CP's behavior in
             -- daily development. However, a stream proxy can't be the CP.
@@ -62,34 +101,33 @@ local function new()
             proxy_by_conf_server = true
 
         elseif local_conf.deployment.role == "control_plane" then
-            -- TODO: add the proxy conf in control_plane
-            proxy_by_conf_server = true
-        end
-    else
-        etcd_conf = clone_tab(local_conf.etcd)
-    end
+            local addr = local_conf.deployment.role_control_plane.conf_server.listen
+            etcd_conf.host = {"https://" .. addr}
+            etcd_conf.tls = {
+                verify = false,
+            }
 
-    local prefix = etcd_conf.prefix
-    etcd_conf.http_host = etcd_conf.host
-    etcd_conf.host = nil
-    etcd_conf.prefix = nil
-    etcd_conf.protocol = "v3"
-    etcd_conf.api_prefix = "/v3"
+            if has_mtls_support() and local_conf.deployment.certs.cert then
+                local cert = local_conf.deployment.certs.cert
+                local cert_key = local_conf.deployment.certs.cert_key
+                etcd_conf.tls.cert = cert
+                etcd_conf.tls.key = cert_key
+            end
 
-    -- default to verify etcd cluster certificate
-    etcd_conf.ssl_verify = true
-    if etcd_conf.tls then
-        if etcd_conf.tls.verify == false then
-            etcd_conf.ssl_verify = false
-        end
+            proxy_by_conf_server = true
 
-        if etcd_conf.tls.cert then
-            etcd_conf.ssl_cert_path = etcd_conf.tls.cert
-            etcd_conf.ssl_key_path = etcd_conf.tls.key
-        end
+        elseif local_conf.deployment.role == "data_plane" then
+            if has_mtls_support() and local_conf.deployment.certs.cert then
+                local cert = local_conf.deployment.certs.cert
+                local cert_key = local_conf.deployment.certs.cert_key
 
-        if etcd_conf.tls.sni then
-            etcd_conf.sni = etcd_conf.tls.sni
+                if not etcd_conf.tls then
+                    etcd_conf.tls = {}
+                end
+
+                etcd_conf.tls.cert = cert
+                etcd_conf.tls.key = cert_key
+            end
         end
     end
 
@@ -106,15 +144,28 @@ local function new()
         })
     end
 
-    local etcd_cli
-    etcd_cli, err = etcd.new(etcd_conf)
-    if not etcd_cli then
+    return _new(etcd_conf)
+end
+_M.new = new
+
+
+---
+-- Create an etcd client which will connect to etcd without being proxied by the conf server.
+-- This method is used in init_worker phase when the conf server is not ready.
+--
+-- @function core.etcd.new_without_proxy
+-- @treturn table|nil the etcd client, or nil if failed.
+-- @treturn string|nil the configured prefix of etcd keys, or nil if failed.
+-- @treturn nil|string the error message.
+function _M.new_without_proxy()
+    local local_conf, err = fetch_local_conf()
+    if not local_conf then
         return nil, nil, err
     end
 
-    return etcd_cli, prefix
+    local etcd_conf = clone_tab(local_conf.etcd)
+    return _new(etcd_conf)
 end
-_M.new = new
 
 
 -- convert ETCD v3 entry to v2 one
@@ -168,7 +219,7 @@ function _M.get_format(res, real_key, is_dir, formatter)
         return not_found(res)
     end
 
-    res.body.action = "get"
+    v3_adapter.to_v3(res.body, "get")
 
     if formatter then
         return formatter(res)
@@ -196,6 +247,7 @@ function _M.get_format(res, real_key, is_dir, formatter)
     end
 
     res.body.kvs = nil
+    v3_adapter.to_v3_list(res.body)
     return res
 end
 
@@ -269,10 +321,14 @@ local function set(key, value, ttl)
         return nil, err
     end
 
+    if res.body.error then
+        return nil, res.body.error
+    end
+
     res.headers["X-Etcd-Index"] = res.body.header.revision
 
     -- etcd v3 set would not return kv info
-    res.body.action = "set"
+    v3_adapter.to_v3(res.body, "set")
     res.body.node = {}
     res.body.node.key = prefix .. key
     res.body.node.value = value
@@ -335,7 +391,7 @@ function _M.atomic_set(key, value, ttl, mod_revision)
 
     res.headers["X-Etcd-Index"] = res.body.header.revision
     -- etcd v3 set would not return kv info
-    res.body.action = "compareAndSwap"
+    v3_adapter.to_v3(res.body, "compareAndSwap")
     res.body.node = {
         key = key,
         value = value,
@@ -373,7 +429,7 @@ function _M.push(key, value, ttl)
         return nil, err
     end
 
-    res.body.action = "create"
+    v3_adapter.to_v3(res.body, "create")
     return res, nil
 end
 
@@ -397,7 +453,7 @@ function _M.delete(key)
     end
 
     -- etcd v3 set would not return kv info
-    res.body.action = "delete"
+    v3_adapter.to_v3(res.body, "delete")
     res.body.node = {}
     res.body.key = prefix .. key
 
@@ -417,7 +473,7 @@ end
 -- --   etcdserver = "3.5.0"
 -- -- }
 function _M.server_version()
-    local etcd_cli, err = new()
+    local etcd_cli, _, err = new()
     if not etcd_cli then
         return nil, err
     end
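The extracted `_new(etcd_conf)` helper rewrites the user-facing etcd config into the shape `resty.etcd` expects: `host` becomes `http_host`, the `prefix` is returned separately, the protocol is pinned to v3, and the `tls` section is mapped onto `ssl_*` fields with verification on by default. A dictionary-level Python sketch of that transformation:

```python
def build_etcd_conf(conf):
    # mirrors _new(): rename host -> http_host, pin the v3 protocol,
    # and map the tls section onto the client's ssl_* fields
    out = dict(conf)
    prefix = out.pop("prefix", None)
    out["http_host"] = out.pop("host", None)
    out["protocol"] = "v3"
    out["api_prefix"] = "/v3"
    out["ssl_verify"] = True            # verify etcd certs by default
    tls = out.get("tls")
    if tls:
        if tls.get("verify") is False:
            out["ssl_verify"] = False
        if tls.get("cert"):
            out["ssl_cert_path"] = tls["cert"]
            out["ssl_key_path"] = tls.get("key")
        if tls.get("sni"):
            out["sni"] = tls["sni"]
    return out, prefix
```

`new_without_proxy` then reuses this on the plain `local_conf.etcd` section, skipping the conf server entirely during startup.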
index 2a6bb47ced6119a41aeff4fdc92226f81de62a9a..b307cc25d544b0b55e90795c0ad07209c9ec0f1d 100644 (file)
@@ -91,6 +91,10 @@ end
 -- local arr = {"a", "b", "c"}
 -- local idx = core.table.array_find(arr, "b") -- idx = 2
 function _M.array_find(array, val)
+    if type(array) ~= "table" then
+        return nil
+    end
+
     for i, v in ipairs(array) do
         if v == val then
             return i
index 3b0e34726fe46e973f9f86df27dd7bf712b4a642..242b667a772a9687d32b68a742d72b977d4d56cf 100644 (file)
@@ -20,5 +20,5 @@
 -- @module core.version
 
 return {
-    VERSION = "2.14.1"
+    VERSION = "2.15.0"
 }
index a03f27a5ac682822d93955d2a26096a6b956e52a..3dca064039fbcde3a48529ae1377a4578c4163f5 100644 (file)
@@ -263,6 +263,9 @@ local function list_watch(informer, apiserver)
     local reason, message
     local httpc = http.new()
 
+    informer.continue = ""
+    informer.version = ""
+
     informer.fetch_state = "connecting"
     core.log.info("begin to connect ", apiserver.host, ":", apiserver.port)
 
index 25d9d5aa2bfbf7c52ad45c2dc3b59ff4e4be3965..17f1740ef47bbb79455101c33436f54b697f01df 100644 (file)
@@ -41,7 +41,6 @@ local apisix_ssl      = require("apisix.ssl")
 local upstream_util   = require("apisix.utils.upstream")
 local xrpc            = require("apisix.stream.xrpc")
 local ctxdump         = require("resty.ctxdump")
-local ngx_balancer    = require("ngx.balancer")
 local debug           = require("apisix.debug")
 local pubsub_kafka    = require("apisix.pubsub.kafka")
 local ngx             = ngx
@@ -219,10 +218,7 @@ local function set_upstream_host(api_ctx, picked_server)
         return
     end
 
-    local nodes_count = up_conf.nodes and #up_conf.nodes or 0
-    if nodes_count == 1 or ngx_balancer.recreate_request then
-        api_ctx.var.upstream_host = picked_server.upstream_host
-    end
+    api_ctx.var.upstream_host = picked_server.upstream_host
 end
 
 
index d8f4d538c83dd2611dbd2b8e7a24a8ad84133119..a624a56961c6ef44b5979e2e40399f8f8196162c 100644 (file)
@@ -19,6 +19,7 @@ local core          = require("apisix.core")
 local config_util   = require("apisix.core.config_util")
 local enable_debug  = require("apisix.debug").enable_debug
 local wasm          = require("apisix.wasm")
+local expr          = require("resty.expr.v1")
 local ngx           = ngx
 local crc32         = ngx.crc32_short
 local ngx_exit      = ngx.exit
@@ -40,6 +41,9 @@ local stream_local_plugins_hash = core.table.new(0, 32)
 local merged_route = core.lrucache.new({
     ttl = 300, count = 512
 })
+local expr_lrucache = core.lrucache.new({
+    ttl = 300, count = 512
+})
 local local_conf
 local check_plugin_metadata
 
@@ -272,7 +276,7 @@ local function load_stream(plugin_names)
 end
 
 
-function _M.load(config)
+local function get_plugin_names(config)
     local http_plugin_names
     local stream_plugin_names
 
@@ -294,7 +298,7 @@ function _M.load(config)
         local plugins_conf = config.value
         -- plugins_conf can be nil when another instance writes into etcd key "/apisix/plugins/"
         if not plugins_conf then
-            return local_plugins
+            return true
         end
 
         for _, conf in ipairs(plugins_conf) do
@@ -306,6 +310,16 @@ function _M.load(config)
         end
     end
 
+    return false, http_plugin_names, stream_plugin_names
+end
+
+
+function _M.load(config)
+    local ignored, http_plugin_names, stream_plugin_names = get_plugin_names(config)
+    if ignored then
+        return local_plugins
+    end
+
     if ngx.config.subsystem == "http" then
         if not http_plugin_names then
             core.log.error("failed to read plugin list from local file")
@@ -371,6 +385,32 @@ local function trace_plugins_info_for_debug(ctx, plugins)
     end
 end
 
+local function meta_filter(ctx, plugin_name, plugin_conf)
+    local filter = plugin_conf._meta and plugin_conf._meta.filter
+    if not filter then
+        return true
+    end
+
+    local ex, ok, err
+    if ctx then
+        ex, err = expr_lrucache(plugin_name .. ctx.conf_type .. ctx.conf_id,
+                                 ctx.conf_version, expr.new, filter)
+    else
+        ex, err = expr.new(filter)
+    end
+    if not ex then
+        core.log.warn("failed to get the 'vars' expression: ", err,
+                      " plugin_name: ", plugin_name)
+        return true
+    end
+    ok, err = ex:eval()
+    if err then
+        core.log.warn("failed to run the 'vars' expression: ", err,
+                         " plugin_name: ", plugin_name)
+        return true
+    end
+    return ok
+end
 
 function _M.filter(ctx, conf, plugins, route_conf, phase)
     local user_plugin_conf = conf.value.plugins
@@ -389,7 +429,12 @@ function _M.filter(ctx, conf, plugins, route_conf, phase)
         local name = plugin_obj.name
         local plugin_conf = user_plugin_conf[name]
 
-        if type(plugin_conf) == "table" and not plugin_conf.disable then
+        if type(plugin_conf) ~= "table" then
+            goto continue
+        end
+
+        local matched = meta_filter(ctx, name, plugin_conf)
+        if not plugin_conf.disable and matched then
             if plugin_obj.run_policy == "prefer_route" and route_plugin_conf ~= nil then
                 local plugin_conf_in_route = route_plugin_conf[name]
                 if plugin_conf_in_route and not plugin_conf_in_route.disable then
@@ -402,9 +447,9 @@ function _M.filter(ctx, conf, plugins, route_conf, phase)
             end
             core.table.insert(plugins, plugin_obj)
             core.table.insert(plugins, plugin_conf)
-
-            ::continue::
         end
+
+        ::continue::
     end
 
     trace_plugins_info_for_debug(ctx, plugins)
@@ -620,16 +665,21 @@ end
 
 
 function _M.init_worker()
-    _M.load()
+    local _, http_plugin_names, stream_plugin_names = get_plugin_names()
 
     -- some plugins need to be initialized in init* phases
-    if is_http and local_plugins_hash["prometheus"] then
-        local prometheus_enabled_in_stream = stream_local_plugins_hash["prometheus"]
+    if is_http and core.table.array_find(http_plugin_names, "prometheus") then
+        local prometheus_enabled_in_stream =
+            core.table.array_find(stream_plugin_names, "prometheus")
         require("apisix.plugins.prometheus.exporter").http_init(prometheus_enabled_in_stream)
-    elseif not is_http and stream_local_plugins_hash["prometheus"] then
+    elseif not is_http and core.table.array_find(stream_plugin_names, "prometheus") then
         require("apisix.plugins.prometheus.exporter").stream_init()
     end
 
+    -- some plugins need to be initialized after prometheus
+    -- see https://github.com/apache/apisix/issues/3286
+    _M.load()
+
     if local_conf and not local_conf.apisix.enable_admin then
         init_plugins_syncer()
     end
@@ -720,6 +770,13 @@ local function check_single_plugin_schema(name, plugin_conf, schema_type, skip_d
                 .. name .. " err: " .. err
         end
 
+        if plugin_conf._meta and plugin_conf._meta.filter then
+            ok, err = expr.new(plugin_conf._meta.filter)
+            if not ok then
+                return nil, "failed to validate the 'vars' expression: " .. err
+            end
+        end
+
         plugin_conf.disable = disable
     end
 
@@ -824,13 +881,17 @@ function _M.run_plugin(phase, plugins, api_ctx)
         and phase ~= "delayed_body_filter"
     then
         for i = 1, #plugins, 2 do
-            if phase == "rewrite_in_consumer" and plugins[i + 1]._from_consumer
-                    and plugins[i].type ~= "auth"then
-                phase = "rewrite"
+            local phase_func
+            if phase == "rewrite_in_consumer" then
+                if plugins[i].type == "auth" then
+                    plugins[i + 1]._skip_rewrite_in_consumer = true
+                end
+                phase_func = plugins[i]["rewrite"]
+            else
+                phase_func = plugins[i][phase]
             end
-            local phase_func = plugins[i][phase]
 
-            if phase == "rewrite" and plugins[i + 1]._skip_rewrite_in_consumer then
+            if phase == "rewrite_in_consumer" and plugins[i + 1]._skip_rewrite_in_consumer then
                 goto CONTINUE
             end
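The new `meta_filter` above is deliberately fail-open: if no `_meta.filter` is configured, or the filter expression cannot be built or evaluated, the plugin still runs. A condensed Python sketch of that decision flow (`eval_expr` is a hypothetical stand-in for `resty.expr`; the lrucache layer is omitted):

```python
def meta_filter(plugin_conf, eval_expr, ctx=None):
    # eval_expr(filter, ctx) stands in for resty.expr's ex:eval()
    flt = (plugin_conf.get("_meta") or {}).get("filter")
    if not flt:
        return True          # no filter configured: always run the plugin
    try:
        ok = eval_expr(flt, ctx)
    except Exception:
        return True          # fail open, as the Lua code logs a warning and returns true
    return bool(ok)
```

`_M.filter` then runs a plugin only when it is both enabled (`not plugin_conf.disable`) and matched by its filter.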
 
index 903ea6ec1913288e4cd93ba8b702d05fcffce80f..cc5a6ff38456d0f64289439c5c03200f495e632e 100644 (file)
@@ -65,7 +65,9 @@ function _M.merge(route_conf, plugin_config)
     route_conf.value.plugins = core.table.clone(route_conf.value.plugins)
 
     for name, value in pairs(plugin_config.value.plugins) do
-        route_conf.value.plugins[name] = value
+        if not route_conf.value.plugins[name] then
+            route_conf.value.plugins[name] = value
+        end
     end
 
     route_conf.update_count = route_conf.update_count + 1
index f7b734645334f81011b204196354c1c5a877b3e3..026f0cfa93da8a609fc1b47cfe42ab1045d449ac 100644 (file)
@@ -21,6 +21,7 @@ local core            = require("apisix.core")
 local http            = require("resty.http")
 local url             = require("net.url")
 local plugin          = require("apisix.plugin")
+local math_random     = math.random
 
 local ngx      = ngx
 local tostring = tostring
@@ -31,7 +32,9 @@ local batch_processor_manager = bp_manager_mod.new(plugin_name)
 local schema = {
     type = "object",
     properties = {
+        -- deprecated, use "endpoint_addrs" instead
         endpoint_addr = core.schema.uri_def,
+        endpoint_addrs = {items = core.schema.uri_def, type = "array", minItems = 1},
         user = {type = "string", default = ""},
         password = {type = "string", default = ""},
         database = {type = "string", default = ""},
@@ -40,7 +43,10 @@ local schema = {
         name = {type = "string", default = "clickhouse logger"},
         ssl_verify = {type = "boolean", default = true},
     },
-    required = {"endpoint_addr", "user", "password", "database", "logtable"}
+    oneOf = {
+        {required = {"endpoint_addr", "user", "password", "database", "logtable"}},
+        {required = {"endpoint_addrs", "user", "password", "database", "logtable"}}
+    },
 }
 
 
@@ -72,11 +78,17 @@ end
 local function send_http_data(conf, log_message)
     local err_msg
     local res = true
-    local url_decoded = url.parse(conf.endpoint_addr)
+    local selected_endpoint_addr
+    if conf.endpoint_addr then
+        selected_endpoint_addr = conf.endpoint_addr
+    else
+        selected_endpoint_addr = conf.endpoint_addrs[math_random(#conf.endpoint_addrs)]
+    end
+    local url_decoded = url.parse(selected_endpoint_addr)
     local host = url_decoded.host
     local port = url_decoded.port
 
-    core.log.info("sending a batch logs to ", conf.endpoint_addr)
+    core.log.info("sending a batch of logs to ", selected_endpoint_addr)
 
     if not port then
         if url_decoded.scheme == "https" then
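The clickhouse-logger change keeps the deprecated single `endpoint_addr` working while adding `endpoint_addrs`, from which one address is picked at random per batch. The selection logic, sketched in Python:

```python
import random


def select_endpoint(conf):
    # prefer the deprecated single endpoint_addr; otherwise pick one of
    # endpoint_addrs at random, like math.random(#conf.endpoint_addrs) in Lua
    if conf.get("endpoint_addr"):
        return conf["endpoint_addr"]
    return random.choice(conf["endpoint_addrs"])
```

The schema's `oneOf` ensures exactly one of the two fields is required, so this fallback is always well defined.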
index 7da62a805fdd10e4e16d0c2f7776b21188a3a72a..2405d33ec764e53b9011876f63f7de02633fc113 100644 (file)
@@ -77,15 +77,24 @@ local schema = {
     required = { "proto_id", "service", "method" },
 }
 
+-- Based on https://cloud.google.com/apis/design/errors#handling_errors
 local status_rel = {
-    ["3"] = 400,
-    ["4"] = 504,
-    ["5"] = 404,
-    ["7"] = 403,
-    ["11"] = 416,
-    ["12"] = 501,
-    ["13"] = 500,
-    ["14"] = 503,
+    ["1"] = 499,    -- CANCELLED
+    ["2"] = 500,    -- UNKNOWN
+    ["3"] = 400,    -- INVALID_ARGUMENT
+    ["4"] = 504,    -- DEADLINE_EXCEEDED
+    ["5"] = 404,    -- NOT_FOUND
+    ["6"] = 409,    -- ALREADY_EXISTS
+    ["7"] = 403,    -- PERMISSION_DENIED
+    ["8"] = 429,    -- RESOURCE_EXHAUSTED
+    ["9"] = 400,    -- FAILED_PRECONDITION
+    ["10"] = 409,   -- ABORTED
+    ["11"] = 400,   -- OUT_OF_RANGE
+    ["12"] = 501,   -- UNIMPLEMENTED
+    ["13"] = 500,   -- INTERNAL
+    ["14"] = 503,   -- UNAVAILABLE
+    ["15"] = 500,   -- DATA_LOSS
+    ["16"] = 401,   -- UNAUTHENTICATED
 }
 
 local _M = {
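The expanded `status_rel` table follows the Google API design guide's gRPC-to-HTTP mapping. A Python sketch of the lookup (the generic 500 fallback for unmapped codes is an assumption for illustration, not taken from this diff):

```python
# gRPC status code -> HTTP status, per the Google API design guide
STATUS_REL = {
    1: 499, 2: 500, 3: 400, 4: 504, 5: 404, 6: 409, 7: 403, 8: 429,
    9: 400, 10: 409, 11: 400, 12: 501, 13: 500, 14: 503, 15: 500, 16: 401,
}


def http_status(grpc_code, default=500):
    # unmapped codes fall back to a generic server error (assumed default)
    return STATUS_REL.get(grpc_code, default)
```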
index c30c17e71855c93ce33437b643e9d2bbedb2e2f5..c2a3cb523394776b2cafa38681fc1e145ff0133a 100644 (file)
@@ -159,7 +159,7 @@ end
 
 function _M.init()
     local err
-    protos, err = core.config.new("/proto", {
+    protos, err = core.config.new("/protos", {
         automatic = true,
         item_schema = core.schema.proto
     })
index 3d3ebdfb4e2dad7a2eea534d053018f4735d1dc4..93cd8c9bef3baa17a3cc2eb44e9783b82d5bffc2 100644 (file)
@@ -33,7 +33,7 @@ local schema = {
     type = "object",
     properties = {
         uri = core.schema.uri_def,
-        auth_header = {type = "string", default = ""},
+        auth_header = {type = "string"},
         timeout = {type = "integer", minimum = 1, default = 3},
         include_req_body = {type = "boolean", default = false},
         include_resp_body = {type = "boolean", default = false},
index 82c12c95b2c54d4b3847915823ca7d3a01924bd8..36006975f5d3fe24815ecaa505bcd034e12534ae 100644 (file)
@@ -60,7 +60,7 @@ local consumer_schema = {
         secret = {type = "string"},
         algorithm = {
             type = "string",
-            enum = {"HS256", "HS512", "RS256"},
+            enum = {"HS256", "HS512", "RS256", "ES256"},
             default = "HS256"
         },
         exp = {type = "integer", minimum = 1, default = 86400},
@@ -71,6 +71,11 @@ local consumer_schema = {
         vault = {
             type = "object",
             properties = {}
+        },
+        lifetime_grace_period = {
+            type = "integer",
+            minimum = 0,
+            default = 0
         }
     },
     dependencies = {
@@ -89,7 +94,7 @@ local consumer_schema = {
                         public_key = {type = "string"},
                         private_key= {type = "string"},
                         algorithm = {
-                            enum = {"RS256"},
+                            enum = {"RS256", "ES256"},
                         },
                     },
                     required = {"public_key", "private_key"},
@@ -101,7 +106,7 @@ local consumer_schema = {
                             properties = {}
                         },
                         algorithm = {
-                            enum = {"RS256"},
+                            enum = {"RS256", "ES256"},
                         },
                     },
                     required = {"vault"},
@@ -161,7 +166,7 @@ function _M.check_schema(conf, schema_type)
         return true
     end
 
-    if conf.algorithm ~= "RS256" and not conf.secret then
+    if conf.algorithm ~= "RS256" and conf.algorithm ~= "ES256" and not conf.secret then
         conf.secret = ngx_encode_base64(resty_random.bytes(32, true))
     elseif conf.base64_secret then
         if ngx_decode_base64(conf.secret) == nil then
@@ -169,7 +174,7 @@ function _M.check_schema(conf, schema_type)
         end
     end
 
-    if conf.algorithm == "RS256" then
+    if conf.algorithm == "RS256" or conf.algorithm == "ES256" then
         -- Possible options are a) both are in vault, b) both in schema
         -- c) one in schema, another in vault.
         if not conf.public_key then
@@ -235,7 +240,7 @@ local function get_secret(conf, consumer_name)
 end
 
 
-local function get_rsa_keypair(conf, consumer_name)
+local function get_rsa_or_ecdsa_keypair(conf, consumer_name)
     local public_key = conf.public_key
     local private_key = conf.private_key
     -- if keys are present in conf, no need to query vault (fallback)
@@ -304,8 +309,10 @@ local function sign_jwt_with_HS(key, consumer, payload)
 end
 
 
-local function sign_jwt_with_RS256(key, consumer, payload)
-    local public_key, private_key, err = get_rsa_keypair(consumer.auth_conf, consumer.username)
+local function sign_jwt_with_RS256_ES256(key, consumer, payload)
+    local public_key, private_key, err = get_rsa_or_ecdsa_keypair(
+        consumer.auth_conf, consumer.username
+    )
     if not public_key then
         core.log.error("failed to sign jwt, err: ", err)
         core.response.exit(503, "failed to sign jwt")
@@ -340,12 +347,12 @@ local function algorithm_handler(consumer, method_only)
         end
 
         return get_secret(consumer.auth_conf, consumer.username)
-    elseif consumer.auth_conf.algorithm == "RS256" then
+    elseif consumer.auth_conf.algorithm == "RS256" or consumer.auth_conf.algorithm == "ES256" then
         if method_only then
-            return sign_jwt_with_RS256
+            return sign_jwt_with_RS256_ES256
         end
 
-        local public_key, _, err = get_rsa_keypair(consumer.auth_conf, consumer.username)
+        local public_key, _, err = get_rsa_or_ecdsa_keypair(consumer.auth_conf, consumer.username)
         return public_key, err
     end
 end
@@ -389,7 +396,10 @@ function _M.rewrite(conf, ctx)
         core.log.error("failed to retrieve secrets, err: ", err)
         return 503, {message = "failed to verify jwt"}
     end
-    jwt_obj = jwt:verify_jwt_obj(auth_secret, jwt_obj)
+    local claim_specs = jwt:get_default_validation_options(jwt_obj)
+    claim_specs.lifetime_grace_period = consumer.auth_conf.lifetime_grace_period
+
+    jwt_obj = jwt:verify_jwt_obj(auth_secret, jwt_obj, claim_specs)
     core.log.info("jwt object: ", core.json.delay_encode(jwt_obj))
 
     if not jwt_obj.verified then
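The new `lifetime_grace_period` option is passed to `verify_jwt_obj` as a claim spec, so a token whose `exp` has just passed is still accepted within the grace window. The core of that check can be sketched as:

```python
import time


def exp_valid(exp, lifetime_grace_period=0, now=None):
    # a token is still accepted while now <= exp + grace period;
    # with the default grace period of 0 this is plain expiry checking
    now = time.time() if now is None else now
    return now <= exp + lifetime_grace_period
```

This is only the expiry half of validation; signature verification still happens in `verify_jwt_obj`.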
index 3fce91141119db3fdd05ecb4fd32e405f636f4ff..d155696b63379ff716d6a5706f3abb0ae0d33345 100644 (file)
@@ -19,7 +19,7 @@ local ngx = ngx
 local ngx_re = require("ngx.re")
 local ipairs = ipairs
 local consumer_mod = require("apisix.consumer")
-local lualdap = require("lualdap")
+local ldap = require("resty.ldap")
 
 local lrucache = core.lrucache.new({
     ttl = 300, count = 512
@@ -31,8 +31,9 @@ local schema = {
     properties = {
         base_dn = { type = "string" },
         ldap_uri = { type = "string" },
-        use_tls = { type = "boolean" },
-        uid = { type = "string" }
+        use_tls = { type = "boolean", default = false },
+        tls_verify = { type = "boolean", default = false },
+        uid = { type = "string", default = "cn" }
     },
     required = {"base_dn","ldap_uri"},
 }
@@ -136,11 +137,23 @@ function _M.rewrite(conf, ctx)
     end
 
     -- 2. try authenticate the user against the ldap server
-    local uid = conf.uid or "cn"
-
-    local userdn =  uid .. "=" .. user.username .. "," .. conf.base_dn
-    local ld = lualdap.open_simple (conf.ldap_uri, userdn, user.password, conf.use_tls)
-    if not ld then
+    local ldap_host, ldap_port = core.utils.parse_addr(conf.ldap_uri)
+
+    local userdn =  conf.uid .. "=" .. user.username .. "," .. conf.base_dn
+    local ldapconf = {
+        timeout = 10000,
+        start_tls = false,
+        ldap_host = ldap_host,
+        ldap_port = ldap_port or 389,
+        ldaps = conf.use_tls,
+        tls_verify = conf.tls_verify,
+        base_dn = conf.base_dn,
+        attribute = conf.uid,
+        keepalive = 60000,
+    }
+    local res, err = ldap.ldap_authenticate(user.username, user.password, ldapconf)
+    if not res then
+        core.log.warn("ldap-auth failed: ", err)
         return 401, { message = "Invalid user authorization" }
     end
 
index 746e474b93d011b0d59238b5a08997876ab84282..e191b62232bfd280d238f564748b0fcbee6e7e7a 100644 (file)
@@ -129,6 +129,7 @@ local schema = {
     }
 }
 
+local schema_copy = core.table.deepcopy(schema)
 
 local _M = {
     version = 0.4,
@@ -151,7 +152,10 @@ function _M.check_schema(conf)
 
     if conf.group then
         local fields = {}
-        for k in pairs(schema.properties) do
+        -- When the group field is configured,
+        -- we will use schema_copy to get the whitelist of properties,
+        -- so that we can avoid getting injected properties.
+        for k in pairs(schema_copy.properties) do
             tab_insert(fields, k)
         end
         local extra = policy_to_additional_properties[conf.policy]
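The `schema_copy` trick above works because a deep copy taken at load time is unaffected by later mutation of the live `schema` table. A minimal standalone sketch of that property, using a generic `deepcopy` rather than APISIX's `core.table.deepcopy`:

```lua
-- Hedged sketch: a deep copy taken at load time is immune to later
-- mutation of the original table (generic helper, not core.table.deepcopy).
local function deepcopy(orig)
    if type(orig) ~= "table" then
        return orig
    end
    local copy = {}
    for k, v in pairs(orig) do
        copy[k] = deepcopy(v)
    end
    return copy
end

local schema = { properties = { count = {}, time_window = {} } }
local schema_copy = deepcopy(schema)

-- simulate a property injected into the live schema at runtime
schema.properties.injected = {}

print(schema.properties.injected ~= nil)       -- true
print(schema_copy.properties.injected == nil)  -- true: the whitelist is intact
```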
index 4a6dbda1ccec425f4e149c626d49765a817e2ca7..b472feca0159b88e07c91ec38c4afbe1d66abaf6 100644 (file)
@@ -73,6 +73,11 @@ local schema = {
         },
         public_key = {type = "string"},
         token_signing_alg_values_expected = {type = "string"},
+        use_pkce = {
+            description = "when set to true, PKCE (Proof Key for Code Exchange) will be used.",
+            type = "boolean",
+            default = false
+        },
         set_access_token_header = {
             description = "Whether the access token should be added as a header to the request " ..
                 "for downstream",
index c65a39c48ba2ab5f0fefc9ec1fe2c36c29424e55..45ff94c3f631fc47fb1c0cd7ee18323c9f732fd7 100644 (file)
@@ -18,6 +18,7 @@ local base_prometheus = require("prometheus")
 local core      = require("apisix.core")
 local plugin    = require("apisix.plugin")
 local ipairs    = ipairs
+local pairs     = pairs
 local ngx       = ngx
 local re_gmatch = ngx.re.gmatch
 local ffi       = require("ffi")
@@ -38,6 +39,8 @@ local get_protos = require("apisix.plugins.grpc-transcode.proto").protos
 local service_fetch = require("apisix.http.service").get
 local latency_details = require("apisix.utils.log-util").latency_details_in_ms
 local xrpc = require("apisix.stream.xrpc")
+local unpack = unpack
+local next = next
 
 
 local ngx_capture
@@ -64,6 +67,31 @@ local function gen_arr(...)
     return inner_tab_arr
 end
 
+local extra_labels_tbl = {}
+
+local function extra_labels(name, ctx)
+    clear_tab(extra_labels_tbl)
+
+    local attr = plugin.plugin_attr("prometheus")
+    local metrics = attr.metrics
+
+    if metrics and metrics[name] and metrics[name].extra_labels then
+        local labels = metrics[name].extra_labels
+        for _, kv in ipairs(labels) do
+            local val, v = next(kv)
+            if ctx then
+                val = ctx.var[v:sub(2)]
+                if val == nil then
+                    val = ""
+                end
+            end
+            core.table.insert(extra_labels_tbl, val)
+        end
+    end
+
+    return extra_labels_tbl
+end
+
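At request time, each configured extra label is a one-key map from label name to an nginx variable (e.g. `- upstream_addr: $upstream_addr`), and the loop above resolves it to that variable's value. A standalone sketch of the log-time resolution (simplified; the real function also yields the label names at init time when no `ctx` is given):

```lua
-- Hedged sketch: resolve configured extra labels to nginx variable values.
-- "vars" stands in for ctx.var; missing variables become "".
local function resolve_extra_labels(labels, vars)
    local out = {}
    for _, kv in ipairs(labels) do
        local _, var = next(kv)          -- e.g. "upstream_addr", "$upstream_addr"
        local val = vars[var:sub(2)]     -- strip the leading "$"
        table.insert(out, val or "")
    end
    return out
end

local labels = {
    { upstream_addr = "$upstream_addr" },
    { upstream_status = "$upstream_status" },
}
local vals = resolve_extra_labels(labels, { upstream_addr = "127.0.0.1:80" })
print(vals[1])  -- 127.0.0.1:80
print(vals[2])  -- empty string: variable not set
```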
 
 local _M = {}
 
@@ -122,6 +150,14 @@ function _M.http_init(prometheus_enabled_in_stream)
             "Etcd modify index for APISIX keys",
             {"key"})
 
+    metrics.shared_dict_capacity_bytes = prometheus:gauge("shared_dict_capacity_bytes",
+            "The capacity of each nginx shared DICT since APISIX start",
+            {"name"})
+
+    metrics.shared_dict_free_space_bytes = prometheus:gauge("shared_dict_free_space_bytes",
+            "The free space of each nginx shared DICT since APISIX start",
+            {"name"})
+
     -- per service
 
     -- The consumer label indicates the name of consumer corresponds to the
@@ -129,15 +165,17 @@ function _M.http_init(prometheus_enabled_in_stream)
     -- no consumer in request.
     metrics.status = prometheus:counter("http_status",
             "HTTP status codes per service in APISIX",
-            {"code", "route", "matched_uri", "matched_host", "service", "consumer", "node"})
+            {"code", "route", "matched_uri", "matched_host", "service", "consumer", "node",
+            unpack(extra_labels("http_status"))})
 
     metrics.latency = prometheus:histogram("http_latency",
         "HTTP request latency in milliseconds per service in APISIX",
-        {"type", "route", "service", "consumer", "node"}, DEFAULT_BUCKETS)
+        {"type", "route", "service", "consumer", "node", unpack(extra_labels("http_latency"))},
+        DEFAULT_BUCKETS)
 
     metrics.bandwidth = prometheus:counter("bandwidth",
             "Total bandwidth in bytes consumed per service in APISIX",
-            {"type", "route", "service", "consumer", "node"})
+            {"type", "route", "service", "consumer", "node", unpack(extra_labels("bandwidth"))})
 
     if prometheus_enabled_in_stream then
         init_stream_metrics()
@@ -199,25 +237,35 @@ function _M.http_log(conf, ctx)
 
     metrics.status:inc(1,
         gen_arr(vars.status, route_id, matched_uri, matched_host,
-                service_id, consumer_name, balancer_ip))
+                service_id, consumer_name, balancer_ip,
+                unpack(extra_labels("http_status", ctx))))
 
     local latency, upstream_latency, apisix_latency = latency_details(ctx)
+    local latency_extra_label_values = extra_labels("http_latency", ctx)
+
     metrics.latency:observe(latency,
-        gen_arr("request", route_id, service_id, consumer_name, balancer_ip))
+        gen_arr("request", route_id, service_id, consumer_name, balancer_ip,
+        unpack(latency_extra_label_values)))
 
     if upstream_latency then
         metrics.latency:observe(upstream_latency,
-            gen_arr("upstream", route_id, service_id, consumer_name, balancer_ip))
+            gen_arr("upstream", route_id, service_id, consumer_name, balancer_ip,
+            unpack(latency_extra_label_values)))
     end
 
     metrics.latency:observe(apisix_latency,
-        gen_arr("apisix", route_id, service_id, consumer_name, balancer_ip))
+        gen_arr("apisix", route_id, service_id, consumer_name, balancer_ip,
+        unpack(latency_extra_label_values)))
+
+    local bandwidth_extra_label_values = extra_labels("bandwidth", ctx)
 
     metrics.bandwidth:inc(vars.request_length,
-        gen_arr("ingress", route_id, service_id, consumer_name, balancer_ip))
+        gen_arr("ingress", route_id, service_id, consumer_name, balancer_ip,
+        unpack(bandwidth_extra_label_values)))
 
     metrics.bandwidth:inc(vars.bytes_sent,
-        gen_arr("egress", route_id, service_id, consumer_name, balancer_ip))
+        gen_arr("egress", route_id, service_id, consumer_name, balancer_ip,
+        unpack(bandwidth_extra_label_values)))
 end
 
 
@@ -352,6 +400,16 @@ local function etcd_modify_index()
 end
 
 
+local function shared_dict_status()
+    local name = {}
+    for shared_dict_name, shared_dict in pairs(ngx.shared) do
+        name[1] = shared_dict_name
+        metrics.shared_dict_capacity_bytes:set(shared_dict:capacity(), name)
+        metrics.shared_dict_free_space_bytes:set(shared_dict:free_space(), name)
+    end
+end
+
+
 local function collect(ctx, stream_only)
     if not prometheus or not metrics then
         core.log.error("prometheus: plugin is not initialized, please make sure ",
@@ -359,6 +417,9 @@ local function collect(ctx, stream_only)
         return 500, {message = "An unexpected error occurred"}
     end
 
+    -- collect ngx.shared.DICT status
+    shared_dict_status()
+
     -- across all services
     nginx_status()
 
index c1d7ec4f5d549f22ab56f5d2657f964117025c2a..f53918003f95dc4d18b4eac528d53dce177cc5d5 100644 (file)
@@ -78,6 +78,11 @@ local schema = {
             type = "object",
             minProperties = 1,
         },
+        use_real_request_uri_unsafe = {
+            description = "use real_request_uri instead, THIS IS VERY UNSAFE.",
+            type        = "boolean",
+            default     = false,
+        },
     },
     minProperties = 1,
 }
@@ -161,7 +166,9 @@ function _M.rewrite(conf, ctx)
     end
 
     local upstream_uri = ctx.var.uri
-    if conf.uri ~= nil then
+    if conf.use_real_request_uri_unsafe then
+        upstream_uri = ctx.var.real_request_uri
+    elseif conf.uri ~= nil then
         upstream_uri = core.utils.resolve_var(conf.uri, ctx.var)
     elseif conf.regex_uri ~= nil then
         local uri, _, err = re_sub(ctx.var.uri, conf.regex_uri[1],
@@ -177,22 +184,24 @@ function _M.rewrite(conf, ctx)
         end
     end
 
-    local index = str_find(upstream_uri, "?")
-    if index then
-        upstream_uri = core.utils.uri_safe_encode(sub_str(upstream_uri, 1, index-1)) ..
-                       sub_str(upstream_uri, index)
-    else
-        upstream_uri = core.utils.uri_safe_encode(upstream_uri)
-    end
-
-    if ctx.var.is_args == "?" then
+    if not conf.use_real_request_uri_unsafe then
+        local index = str_find(upstream_uri, "?")
         if index then
-            ctx.var.upstream_uri = upstream_uri .. "&" .. (ctx.var.args or "")
+            upstream_uri = core.utils.uri_safe_encode(sub_str(upstream_uri, 1, index-1)) ..
+                           sub_str(upstream_uri, index)
+        else
+            upstream_uri = core.utils.uri_safe_encode(upstream_uri)
+        end
+
+        if ctx.var.is_args == "?" then
+            if index then
+                ctx.var.upstream_uri = upstream_uri .. "&" .. (ctx.var.args or "")
+            else
+                ctx.var.upstream_uri = upstream_uri .. "?" .. (ctx.var.args or "")
+            end
         else
-            ctx.var.upstream_uri = upstream_uri .. "?" .. (ctx.var.args or "")
+            ctx.var.upstream_uri = upstream_uri
         end
-    else
-        ctx.var.upstream_uri = upstream_uri
     end
 
     if conf.headers then
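The encode-and-reattach-args logic above can be sketched as a pure function (a standalone approximation; `uri_safe_encode` below is a simplified stand-in for `core.utils.uri_safe_encode`, and the `args` nil-check stands in for the `is_args` test):

```lua
-- Hedged sketch of the path-encoding step above: percent-encode only the
-- path part, keep a query string carried by the rewritten uri, then append
-- the original request args.
local function uri_safe_encode(s)
    return (s:gsub("[^%w%-%._~/]", function(c)
        return string.format("%%%02X", string.byte(c))
    end))
end

local function build_upstream_uri(upstream_uri, args)
    local index = string.find(upstream_uri, "?", 1, true)
    if index then
        upstream_uri = uri_safe_encode(string.sub(upstream_uri, 1, index - 1))
                       .. string.sub(upstream_uri, index)
    else
        upstream_uri = uri_safe_encode(upstream_uri)
    end

    if args then
        return upstream_uri .. (index and "&" or "?") .. args
    end
    return upstream_uri
end

print(build_upstream_uri("/a b/c", "x=1"))   -- /a%20b/c?x=1
print(build_upstream_uri("/a?y=2", "x=1"))   -- /a?y=2&x=1
```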
index 6c9a99a1575ca3fecb13201b84a653e429143f88..d858b9c86df99483f4968fbfbba977a19e0cfaaa 100644 (file)
@@ -101,6 +101,7 @@ end
 
 function _M.check_schema(conf)
     local ok, err = core.schema.check(schema, conf)
+
     if not ok then
         return false, err
     end
@@ -115,6 +116,10 @@ function _M.check_schema(conf)
         end
     end
 
+    if conf.http_to_https and conf.append_query_string then
+        return false, "only one of `http_to_https` and `append_query_string` can be configured."
+    end
+
     return true
 end
 
@@ -192,8 +197,6 @@ function _M.rewrite(conf, ctx)
     local proxy_proto = core.request.header(ctx, "X-Forwarded-Proto")
     local _scheme = proxy_proto or core.request.get_scheme(ctx)
     if conf.http_to_https and _scheme == "http" then
-        -- TODO: add test case
-        -- PR: https://github.com/apache/apisix/pull/1958
         if ret_port == nil or ret_port == 443 or ret_port <= 0 or ret_port > 65535  then
             uri = "https://$host$request_uri"
         else
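The port handling above can be sketched as a pure function (a standalone approximation of the redirect target construction; the `$host`/`$request_uri` placeholders are resolved later by the plugin):

```lua
-- Hedged sketch of the https redirect target above: drop the port when it
-- is the default 443 (or missing/out of range), otherwise pin it explicitly.
local function https_redirect_uri(ret_port)
    if ret_port == nil or ret_port == 443 or ret_port <= 0 or ret_port > 65535 then
        return "https://$host$request_uri"
    end
    return "https://$host:" .. ret_port .. "$request_uri"
end

print(https_redirect_uri(nil))    -- https://$host$request_uri
print(https_redirect_uri(8443))   -- https://$host:8443$request_uri
```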
index 1a2e9867fce0fc5e336f9c1052a38e72f81d6854..a6be6474cd9b7e7f17bfe28ab3b2d524fb6d2b22 100644 (file)
@@ -232,7 +232,7 @@ local function check_url_permission(server, appid, action, resName, client_ip, w
         }
     end
 
-    if res.status ~= 200 and res.status ~= 401 then
+    if res.status ~= 200 and res.status >= 500 then
         return {
             status = 500,
             err = 'request to wolf-server failed, status:' .. res.status
@@ -314,7 +314,7 @@ function _M.rewrite(conf, ctx)
         core.response.set_header(prefix .. "UserId", userId)
         core.response.set_header(prefix .. "Username", username)
         core.response.set_header(prefix .. "Nickname", ngx.escape_uri(nickname))
-        core.request.set_header(ctx, prefix .. "UserId", userId, ctx)
+        core.request.set_header(ctx, prefix .. "UserId", userId)
         core.request.set_header(ctx, prefix .. "Username", username)
         core.request.set_header(ctx, prefix .. "Nickname", ngx.escape_uri(nickname))
     end
@@ -324,9 +324,7 @@ function _M.rewrite(conf, ctx)
         core.log.error(" check_url_permission(",
             core.json.delay_encode(perm_item),
             ") failed, res: ",core.json.delay_encode(res))
-        return 401, fail_response("Invalid user permission",
-            { username = username, nickname = nickname }
-        )
+        return res.status, fail_response(res.err, { username = username, nickname = nickname })
     end
     core.log.info("wolf-rbac check permission passed")
 end
index 16dccc6d8fa2614279ad7fc82d63d47991f577fc..3d721a4a432f6990a428913e8217aa8012759227 100644 (file)
@@ -956,6 +956,11 @@ _M.plugin_injected_schema = {
                 description = "priority of plugins by customized order",
                 type = "integer",
             },
+            filter = {
+                description = "filter determines whether the plugin "..
+                                "needs to be executed at runtime",
+                type  = "array",
+            }
         }
     }
 }
index 891d8d21dd4c35a079123d3bab0aba7ae60bdfa1..28648f8c9b18b0c715f6ed96e5c0e84c271d9099 100644 (file)
@@ -247,7 +247,7 @@ end
 
 function _M.init_worker()
     local err
-    ssl_certificates, err = core.config.new("/ssl", {
+    ssl_certificates, err = core.config.new("/ssls", {
         automatic = true,
         item_schema = core.schema.ssl,
         checker = function (item, schema_type)
@@ -264,7 +264,7 @@ end
 
 function _M.get_by_id(ssl_id)
     local ssl
-    local ssls = core.config.fetch_created_obj("/ssl")
+    local ssls = core.config.fetch_created_obj("/ssls")
     if ssls then
         ssl = ssls:get(tostring(ssl_id))
     end
index 0162ad8137ed01c4f017103f647b527e6345f4d6..a2a0cd3e899ab7103e53ff57bf3228e0f78a7664 100644 (file)
@@ -19,7 +19,6 @@ local core = require("apisix.core")
 local discovery = require("apisix.discovery.init").discovery
 local upstream_util = require("apisix.utils.upstream")
 local apisix_ssl = require("apisix.ssl")
-local balancer = require("ngx.balancer")
 local error = error
 local tostring = tostring
 local ipairs = ipairs
@@ -430,7 +429,7 @@ local function check_upstream_conf(in_dp, conf)
 
         local ssl_id = conf.tls and conf.tls.client_cert_id
         if ssl_id then
-            local key = "/ssl/" .. ssl_id
+            local key = "/ssls/" .. ssl_id
             local res, err = core.etcd.get(key)
             if not res then
                 return nil, "failed to fetch ssl info by "
@@ -458,12 +457,6 @@ local function check_upstream_conf(in_dp, conf)
     end
 
     if is_http then
-        if conf.pass_host == "node" and conf.nodes and
-            not balancer.recreate_request and core.table.nkeys(conf.nodes) ~= 1
-        then
-            return false, "only support single node for `node` mode currently"
-        end
-
         if conf.pass_host == "rewrite" and
             (conf.upstream_host == nil or conf.upstream_host == "")
         then
index 4583fd1b52a01b5390da4246b11208da4b9591ae..780764ae9509ee6a3e2bae283e1f65e9f7518b4a 100755 (executable)
@@ -42,10 +42,6 @@ if [[ -e $OR_EXEC && "$OR_VER" -ge 119 ]]; then
     # use the luajit of openresty
     echo "$LUAJIT_BIN $APISIX_LUA $*"
     exec $LUAJIT_BIN $APISIX_LUA $*
-elif [[ "$LUA_VERSION" =~ "Lua 5.1" ]]; then
-    # OpenResty version is < 1.19, use Lua 5.1 by default
-    echo "lua $APISIX_LUA $*"
-    exec lua $APISIX_LUA $*
 else
-    echo "ERROR: Please check the version of OpenResty and Lua, OpenResty 1.19+ + LuaJIT or OpenResty before 1.19 + Lua 5.1 is required for Apache APISIX."
+    echo "ERROR: Please check the version of OpenResty and Lua, OpenResty 1.19+ with LuaJIT is required for Apache APISIX."
 fi
similarity index 96%
rename from ci/linux_openresty_1_17_runner.sh
rename to ci/linux_openresty_1_19_runner.sh
index b0cbde775e2d8245e97b9cfdefa2151440e2eaac..ed17513089264d022b93fb3222748c5902dc7ffe 100755 (executable)
@@ -17,5 +17,5 @@
 #
 
 
-export OPENRESTY_VERSION=1.17.8.2
+export OPENRESTY_VERSION=1.19.3.2
 . ./ci/linux_openresty_common_runner.sh
index d0350860096b6c7d9354b7cd6e22f230a2813904..226abd7ed204aac9d0a49f9140a2d32be31fe151 100644 (file)
@@ -126,13 +126,19 @@ services:
   openldap:
     image: bitnami/openldap:2.5.8
     environment:
-      LDAP_ADMIN_USERNAME: amdin
-      LDAP_ADMIN_PASSWORD: adminpassword
-      LDAP_USERS: user01,user02
-      LDAP_PASSWORDS: password1,password2
+      - LDAP_ADMIN_USERNAME=admin
+      - LDAP_ADMIN_PASSWORD=adminpassword
+      - LDAP_USERS=user01,user02
+      - LDAP_PASSWORDS=password1,password2
+      - LDAP_ENABLE_TLS=yes
+      - LDAP_TLS_CERT_FILE=/certs/localhost_slapd_cert.pem
+      - LDAP_TLS_KEY_FILE=/certs/localhost_slapd_key.pem
+      - LDAP_TLS_CA_FILE=/certs/apisix.crt
     ports:
       - "1389:1389"
       - "1636:1636"
+    volumes:
+      - ./t/certs:/certs
 
 
   rocketmq_namesrv:
old mode 100644 (file)
new mode 100755 (executable)
index f03d31b..cbada9d
@@ -85,6 +85,8 @@ apisix:
     admin_ssl_cert_key: ""      # Path of your self-signed server side key.
     admin_ssl_ca_cert: ""       # Path of your self-signed ca cert.The CA is used to sign all admin api callers' certificates.
 
+  admin_api_version: v3        # The version of the Admin API; the latest version is v3.
+
   # Default token when use API to call for Admin API.
   # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
   # Disabling this configuration item means that the Admin API does not
@@ -281,7 +283,7 @@ etcd:
     - "http://127.0.0.1:2379"     # multiple etcd address, if your etcd cluster enables TLS, please use https scheme,
                                   # e.g. https://127.0.0.1:2379.
   prefix: /apisix                 # apisix configurations prefix
-  timeout: 30                     # 30 seconds
+  #timeout: 30                    # 30 seconds
   #resync_delay: 5                # when sync failed and a rest is needed, resync after the configured seconds plus 50% random jitter
   #health_check_timeout: 10       # etcd retry the unhealthy nodes after the configured seconds
   startup_retry: 2                # the number of retry to etcd during the startup, default to 2
@@ -324,6 +326,70 @@ etcd:
 #      connect: 2000              # default 2000ms
 #      send: 2000                 # default 2000ms
 #      read: 5000                 # default 5000ms
+#  nacos:
+#    host:
+#      - "http://${username}:${password}@${host1}:${port1}"
+#    prefix: "/nacos/v1/"
+#    fetch_interval: 30    # default 30 sec
+#    weight: 100           # default 100
+#    timeout:
+#      connect: 2000       # default 2000 ms
+#      send: 2000          # default 2000 ms
+#      read: 5000          # default 5000 ms
+#  consul_kv:
+#    servers:
+#      - "http://127.0.0.1:8500"
+#      - "http://127.0.0.1:8600"
+#    prefix: "upstreams"
+#    skip_keys:                    # if you need to skip special keys
+#      - "upstreams/unused_api/"
+#    timeout:
+#      connect: 2000               # default 2000 ms
+#      read: 2000                  # default 2000 ms
+#      wait: 60                    # default 60 sec
+#    weight: 1                     # default 1
+#    fetch_interval: 3             # default 3 sec, only take effect for keepalive: false way
+#    keepalive: true               # default true, use the long pull way to query consul servers
+#    default_server:               # you can define default server when missing hit
+#      host: "127.0.0.1"
+#      port: 20999
+#      metadata:
+#        fail_timeout: 1           # default 1 ms
+#        weight: 1                 # default 1
+#        max_fails: 1              # default 1
+#    dump:                         # if you need, when registered nodes updated can dump into file
+#       path: "logs/consul_kv.dump"
+#       expire: 2592000            # unit sec, here is 30 day
+#  kubernetes:
+#    service:
+#      schema: https                     #apiserver schema, options [http, https], default https
+#      host: ${KUBERNETES_SERVICE_HOST}  #apiserver host, options [ipv4, ipv6, domain, environment variable], default ${KUBERNETES_SERVICE_HOST}
+#      port: ${KUBERNETES_SERVICE_PORT}  #apiserver port, options [port number, environment variable], default ${KUBERNETES_SERVICE_PORT}
+#    client:
+#      # serviceaccount token or path of serviceaccount token_file
+#      token_file: ${KUBERNETES_CLIENT_TOKEN_FILE}
+#      # token: |-
+#       # eyJhbGciOiJSUzI1NiIsImtpZCI6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEif
+#       # 6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEifeyJhbGciOiJSUzI1NiIsImtpZCI
+#    # kubernetes discovery plugin support use namespace_selector
+#    # you can use one of [equal, not_equal, match, not_match] filter namespace
+#    namespace_selector:
+#      # only save endpoints with namespace equal default
+#      equal: default
+#      # only save endpoints with namespace not equal default
+#      #not_equal: default
+#      # only save endpoints with namespace match one of [default, ^my-[a-z]+$]
+#      #match:
+#      #- default
+#      #- ^my-[a-z]+$
+#      # only save endpoints with namespace not match one of [default, ^my-[a-z]+$ ]
+#      #not_match:
+#      #- default
+#      #- ^my-[a-z]+$
+#    # kubernetes discovery plugin support use label_selector
+#    # for the expression of label_selector, please refer to https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
+#    label_selector: |-
+#      first="a",second="b"
 
 graphql:
   max_size: 1048576               # the maximum size limitation of graphql in bytes, default 1MiB
@@ -458,6 +524,20 @@ plugin_attr:
     export_addr:
       ip: 127.0.0.1
       port: 9091
+    #metrics:
+    #  http_status:
+    #    # extra labels from nginx variables
+    #    extra_labels:
+    #      # the label name doesn't need to be the same as variable name
+    #      # below labels are only examples, you could add any valid variables as you need
+    #      - upstream_addr: $upstream_addr
+    #      - upstream_status: $upstream_status
+    #  http_latency:
+    #    extra_labels:
+    #      - upstream_addr: $upstream_addr
+    #  bandwidth:
+    #    extra_labels:
+    #      - upstream_addr: $upstream_addr
   server-info:
     report_ttl: 60   # live time for server info in etcd (unit: second)
   dubbo-proxy:
index 247d9b3bc1520f191d41dc3dc8a8e4a3c2c626fe..1ea90c10ca34e3fbc51aa3e30116c74e7c11abb2 100644 (file)
       "timeShift": null,
       "title": "Nginx metric errors",
       "type": "stat"
+    },
+    {
+      "aliasColors": {},
+      "bars": false,
+      "dashLength": 10,
+      "dashes": false,
+      "datasource": "${DS_PROMETHEUS}",
+      "description": "The free space percent of each nginx shared DICT since APISIX start",
+      "fieldConfig": {
+        "defaults": {
+          "custom": {},
+          "links": []
+        },
+        "overrides": []
+      },
+      "fill": 1,
+      "fillGradient": 0,
+      "gridPos": {
+        "h": 8,
+        "w": 24,
+        "x": 0,
+        "y": 57
+      },
+      "hiddenSeries": false,
+      "id": 35,
+      "legend": {
+        "alignAsTable": false,
+        "avg": false,
+        "current": false,
+        "max": false,
+        "min": false,
+        "rightSide": false,
+        "show": true,
+        "total": false,
+        "values": false
+      },
+      "lines": true,
+      "linewidth": 1,
+      "nullPointMode": "null",
+      "options": {
+        "alertThreshold": true
+      },
+      "percentage": false,
+      "pluginVersion": "7.3.7",
+      "pointradius": 2,
+      "points": false,
+      "renderer": "flot",
+      "seriesOverrides": [],
+      "spaceLength": 10,
+      "stack": false,
+      "steppedLine": false,
+      "targets": [
+        {
+          "expr": "(apisix_shared_dict_free_space_bytes * 100) / on (name) apisix_shared_dict_capacity_bytes",
+          "instant": false,
+          "interval": "",
+          "intervalFactor": 1,
+          "legendFormat": "{{state}}",
+          "refId": "A"
+        }
+      ],
+      "thresholds": [],
+      "timeFrom": null,
+      "timeRegions": [],
+      "timeShift": null,
+      "title": "Nginx shared dict free space percent",
+      "tooltip": {
+        "shared": true,
+        "sort": 0,
+        "value_type": "individual"
+      },
+      "type": "graph",
+      "xaxis": {
+        "buckets": null,
+        "mode": "time",
+        "name": null,
+        "show": true,
+        "values": []
+      },
+      "yaxes": [
+        {
+          "$$hashKey": "object:117",
+          "decimals": null,
+          "format": "percent",
+          "label": "",
+          "logBase": 1,
+          "max": null,
+          "min": null,
+          "show": true
+        },
+        {
+          "$$hashKey": "object:118",
+          "decimals": null,
+          "format": "Misc",
+          "label": "",
+          "logBase": 1,
+          "max": null,
+          "min": null,
+          "show": true
+        }
+      ],
+      "yaxis": {
+        "align": false,
+        "alignLevel": null
+      }
     }
   ],
   "refresh": "5s",
index deaf1c4c7ce14bf6abb5d8cc1d5eb9d6addca906..8193f61528f5a5be69214629b72f80401aad2d38 100644 (file)
@@ -27,6 +27,92 @@ By default, the Admin API listens to port `9080` (`9443` for HTTPS) when APISIX
 
 **Note**: Mentions of `X-API-KEY` in this document refer to `apisix.admin_key.key`—the access token for the Admin API—in your configuration file.
 
+## V3
+
+The Admin API introduces some breaking changes in its V3 version and supports additional features.
+
+### Support new response body format
+
+1. The `action` field is removed from the response body.
+2. The response body structure is adjusted when fetching a list of resources. The new structure looks like this:
+
+```json
+{
+    "count":2,
+    "list":[
+        {
+            ...
+        },
+        {
+            ...
+        }
+    ]
+}
+```
+
+### Support paging query
+
+Paging is supported when getting the resource list. The paging parameters include:
+
+| parameter | Default | Valid range | Description                  |
+| --------- | ------  | ----------- | ---------------------------- |
+| page      | 1       | [1, ...]    | Page number                  |
+| page_size |         | [10, 500]   | Number of resources per page |
+
+The example is as follows:
+
+```shell
+$ curl "http://127.0.0.1:9080/apisix/admin/routes?page=1&page_size=10" \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X GET
+{
+  "count": 1,
+  "list": [
+    {
+      ...
+    }
+  ]
+}
+```
+
+Resources that support paging queries:
+
+- Consumer
+- Global Rules
+- Plugin Config
+- Proto
+- Route
+- Service
+- SSL
+- Stream Route
+- Upstream
+
+### Support filtering query
+
+When getting a list of resources, it supports filtering resources based on `name`, `label`, `uri`.
+
+| parameter | Description                                                   |
+| --------- | ------------------------------------------------------------ |
+| name      | Query resources by `name`. A resource that has no `name` will not appear in the results. |
+| label     | Query resources by `label`. A resource that has no `label` will not appear in the results. |
+| uri       | Supported on Route resources only. A Route appears in the results if its `uri` equals the queried uri, or its `uris` contains the queried uri. |
+
+When multiple filter parameters are used, the intersection of their query results is returned.
+
+The following example returns a list of routes in which every route satisfies: the `name` of the route contains the string "test", the `uri` contains the string "foo", and the `label` of the route is unrestricted, since the queried label is the empty string.
+
+```shell
+$ curl "http://127.0.0.1:9080/apisix/admin/routes?name=test&uri=foo&label=" \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X GET
+{
+  "count": 1,
+  "list": [
+    {
+      ...
+    }
+  ]
+}
+```
+
 ## Route
 
 **API**: /apisix/admin/routes/{id}?ttl=0
@@ -486,7 +572,7 @@ HTTP/1.1 200 OK
 Date: Thu, 26 Dec 2019 08:17:49 GMT
 ...
 
-{"node":{"value":{"username":"jack","plugins":{"key-auth":{"key":"auth-one"},"limit-count":{"time_window":60,"count":2,"rejected_code":503,"key":"remote_addr","policy":"local"}}},"createdIndex":64,"key":"\/apisix\/consumers\/jack","modifiedIndex":64},"prevNode":{"value":"{\"username\":\"jack\",\"plugins\":{\"key-auth\":{\"key\":\"auth-one\"},\"limit-count\":{\"time_window\":60,\"count\":2,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"}}}","createdIndex":63,"key":"\/apisix\/consumers\/jack","modifiedIndex":63},"action":"set"}
+{"node":{"value":{"username":"jack","plugins":{"key-auth":{"key":"auth-one"},"limit-count":{"time_window":60,"count":2,"rejected_code":503,"key":"remote_addr","policy":"local"}}},"createdIndex":64,"key":"\/apisix\/consumers\/jack","modifiedIndex":64},"prevNode":{"value":"{\"username\":\"jack\",\"plugins\":{\"key-auth\":{\"key\":\"auth-one\"},\"limit-count\":{\"time_window\":60,\"count\":2,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"}}}","createdIndex":63,"key":"\/apisix\/consumers\/jack","modifiedIndex":63}}
 ```
 
 Since `v2.2`, we can bind multiple authentication plugins to the same consumer.
@@ -524,7 +610,7 @@ In addition to the equalization algorithm selections, Upstream also supports pas
 | Name                        | Optional                                    | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      | Example                                                                                                                                    |
 | --------------------------- | ------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
 | type                        | required                                    | Load balancing algorithm to be used.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |                                                                                                                                            |
-| nodes                       | required, can't be used with `service_name` | IP addresses (with optional ports) of the Upstream nodes represented as a hash table or an array. In the hash table, the key is the IP address and the value is the weight of the node for the load balancing algorithm. In the array, each item is a hash table with keys `host`, `weight`, and the optional `port` and `priority`. Empty nodes are treated as placeholders and clients trying to access this Upstream will receive a 502 response.                                                                                                                                                                                                                                                                             | `192.168.1.100:80`                                                                                                                         |
+| nodes                       | required, can't be used with `service_name` | IP addresses (with optional ports) of the Upstream nodes represented as a hash table or an array. In the hash table, the key is the IP address and the value is the weight of the node for the load balancing algorithm. When the hash table form is used and the key is an IPv6 address with a port, the IPv6 address must be enclosed in square brackets. In the array, each item is a hash table with keys `host`, `weight`, and the optional `port` and `priority`. Empty nodes are treated as placeholders and clients trying to access this Upstream will receive a 502 response.                                                                                                             | `192.168.1.100:80`, `[::1]:80`                                                                                                             |
 | service_name                | required, can't be used with `nodes`        | Service name used for [service discovery](discovery.md).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | `a-bootiful-client`                                                                                                                        |
 | discovery_type              | required, if `service_name` is used         | The type of service [discovery](discovery.md).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | `eureka`                                                                                                                                   |
 | hash_on                     | optional                                    | Only valid if the `type` is `chash`. Supports Nginx variables (`vars`), custom headers (`header`), `cookie` and `consumer`. Defaults to `vars`.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |                                                                                                                                            |
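As an illustrative sketch (addresses and weights are placeholders), the hash table form of `nodes`, including a bracketed IPv6 key, looks like:

```json
{
  "nodes": {
    "192.168.1.100:80": 1,
    "[::1]:80": 1
  }
}
```

and the equivalent array form, where `port` and `priority` are optional:

```json
{
  "nodes": [
    { "host": "192.168.1.100", "port": 80, "weight": 1 },
    { "host": "::1", "port": 80, "weight": 1, "priority": 0 }
  ]
}
```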
@@ -766,17 +852,17 @@ Currently, the response is returned from etcd.
 
 ## SSL
 
-**API**:/apisix/admin/ssl/{id}
+**API**:/apisix/admin/ssls/{id}
 
 ### Request Methods
 
 | Method | Request URI            | Request Body | Description                                     |
 | ------ | ---------------------- | ------------ | ----------------------------------------------- |
-| GET    | /apisix/admin/ssl      | NULL         | Fetches a list of all configured SSL resources. |
-| GET    | /apisix/admin/ssl/{id} | NULL         | Fetch specified resource by id.                 |
-| PUT    | /apisix/admin/ssl/{id} | {...}        | Creates a resource with the specified id.           |
-| POST   | /apisix/admin/ssl      | {...}        | Creates a resource and assigns a random id.           |
-| DELETE | /apisix/admin/ssl/{id} | NULL         | Removes the resource with the specified id.     |
+| GET    | /apisix/admin/ssls      | NULL         | Fetches a list of all configured SSL resources. |
+| GET    | /apisix/admin/ssls/{id} | NULL         | Fetch specified resource by id.                 |
+| PUT    | /apisix/admin/ssls/{id} | {...}        | Creates a resource with the specified id.           |
+| POST   | /apisix/admin/ssls      | {...}        | Creates a resource and assigns a random id.           |
+| DELETE | /apisix/admin/ssls/{id} | NULL         | Removes the resource with the specified id.     |
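As a sketch (the PEM contents and SNI are placeholders), the body of a `PUT /apisix/admin/ssls/{id}` request carries the certificate, its private key, and the SNIs it serves:

```json
{
  "cert": "<PEM-encoded certificate>",
  "key": "<PEM-encoded private key>",
  "snis": ["test.com"]
}
```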
 
 ### Request Body Parameters
 
index 8bef62289192a211b47c0e2f9674a1bc42a7a350..0a6139c6f3a941844178a7d246c5caefab5ef8dd 100644 (file)
@@ -25,42 +25,21 @@ title: APISIX
 
 ![flow-software-architecture](../../../assets/images/flow-software-architecture.png)
 
-## Plugin Loading Process
-
-![flow-load-plugin](../../../assets/images/flow-load-plugin.png)
-
-## Plugin Hierarchy Structure
-
-![flow-plugin-internal](../../../assets/images/flow-plugin-internal.png)
-
-## Configuring APISIX
+Apache APISIX is a dynamic, real-time, high-performance cloud-native API gateway. It is built on top of NGINX + ngx_lua technology and leverages the power offered by LuaJIT. [Why Apache APISIX chose Nginx and Lua to build API Gateway?](https://apisix.apache.org/blog/2021/08/25/why-apache-apisix-chose-nginx-and-lua/)
 
-Apache APISIX can be configured in two ways:
+APISIX is divided into two main parts:
 
-1. By directly changing `conf/config.yaml`.
-2. Using the `--config` or the `-c` flag to pass in the file path of your config file while starting APISIX (`apisix start -c <path to config file>`).
+1. The APISIX core, including the Lua plugin, the multi-language plugin runtime, the Wasm plugin runtime, etc.
+2. A feature-rich set of built-in plugins, covering observability, security, traffic control, etc.
 
-Configurations can be added to this YAML file and Apache APISIX will fall back to the default configurations for anything that is not configured in this file.
+The APISIX core provides important functions such as route matching, load balancing, service discovery, and the management API, along with basic modules such as configuration management. It also includes the APISIX plugin runtime, which provides the runtime framework for native Lua plugins and multi-language plugins, as well as the experimental Wasm plugin runtime. The APISIX multi-language plugin runtime supports various development languages, such as Golang, Java, Python, and JS.
 
-For example, to set the default listening port to 8000 while keeping other configurations as default, your configuration file (`config.yaml`) would look like:
+APISIX currently has various built-in plugins covering many areas of API gateways, such as authentication and authorization, security, observability, traffic management, and multi-protocol access. The plugins currently built into APISIX are implemented in native Lua. For the introduction and usage of each plugin, please check the [documentation](https://apisix.apache.org/docs/apisix/plugins/batch-requests) of the relevant plugin.
 
-```yaml
-apisix:
-  node_listen: 8000 # APISIX listening port
-```
-
-Similarly, to set the listening port to 8000 and set the etcd address to `http://foo:2379` while keeping other configurations as default, your configuration file would look like:
-
-```yaml
-apisix:
-  node_listen: 8000 # APISIX listening port
-
-etcd:
-  host: "http://foo:2379" # etcd address
-```
+## Plugin Loading Process
 
-Default configurations of APISIX can be found in the `conf/config-default.yaml` file.
+![flow-load-plugin](../../../assets/images/flow-load-plugin.png)
 
-**Note**: This file is bound to the APISIX source code and should **NOT** be modified. The configuration should only be changed by the methods mentioned above.
+## Plugin Hierarchy Structure
 
-**Note**: The `conf/nginx.conf` file is automatically generated by APISIX and should **NOT** be edited.
+![flow-plugin-internal](../../../assets/images/flow-plugin-internal.png)
diff --git a/docs/en/latest/architecture-design/debug-mode.md b/docs/en/latest/architecture-design/debug-mode.md
deleted file mode 100644 (file)
index 479bdec..0000000
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Debug Mode
----
-
-<!--
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
--->
-
-### Basic Debug Mode
-
-You can enable the basic debug mode by adding this line to your `conf/debug.yaml` file.
-
-```
-basic:
-  enable: true
-```
-
-**Note**: Before Apache APISIX 2.10, basic debug mode was enabled by setting `apisix.enable_debug = true` in the `conf/config.yaml` file.
-
-For example, if we are using two plugins `limit-conn` and `limit-count` for a Route `/hello`, we will receive a response with the header `Apisix-Plugins: limit-conn, limit-count` when we enable the basic debug mode.
-
-```shell
-$ curl http://127.0.0.1:1984/hello -i
-HTTP/1.1 200 OK
-Content-Type: text/plain
-Transfer-Encoding: chunked
-Connection: keep-alive
-Apisix-Plugins: limit-conn, limit-count
-X-RateLimit-Limit: 2
-X-RateLimit-Remaining: 1
-Server: openresty
-
-hello world
-```
-
-If the debug information cannot be included in a response header (say when the plugin is in a stream subsystem), the information will be logged in the error log at a `warn` level.
-
-### Advanced Debug Mode
-
-Advanced debug mode can also be enabled by modifying the configuration in the `conf/debug.yaml` file.
-
-Enable advanced debug mode by modifying the configuration in `conf/debug.yaml` file.
-
-The checker checks every second for changes to the configuration files. An `#END` flag is added to let the checker know that it should only look for changes till that point.
-
-The checker would only check this if the file was updated by checking its last modification time.
-
-| Key                             | Optional | Description                                                                                                                               | Default |
-| ------------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------- | ------- |
-| hook_conf.enable                | required | Enable/Disable hook debug trace. Target module function's input arguments or returned value would be printed once this option is enabled. | false   |
-| hook_conf.name                  | required | The module list name of the hook which has enabled debug trace.                                                                           |         |
-| hook_conf.log_level             | required | Logging levels for input arguments & returned values.                                                                                     | warn    |
-| hook_conf.is_print_input_args   | required | Enable/Disable printing input arguments.                                                                                                  | true    |
-| hook_conf.is_print_return_value | required | Enable/Disable printing returned values.                                                                                                  | true    |
-
-Example:
-
-```yaml
-hook_conf:
-  enable: false # Enable/Disable Hook Debug Trace
-  name: hook_phase # The Module List Name of Hook which has enabled Debug Trace
-  log_level: warn # Logging Levels
-  is_print_input_args: true # Enable/Disable Input Arguments Print
-  is_print_return_value: true # Enable/Disable Returned Value Print
-
-hook_phase: # Module Function List, Name: hook_phase
-  apisix: # Referenced Module Name
-    - http_access_phase # Function Names:Array
-    - http_header_filter_phase
-    - http_body_filter_phase
-    - http_log_phase
-#END
-```
-
-### Enable Advanced Debug Mode Dynamically
-
-You can also enable the advanced debug mode to take effect on particular requests.
-
-For example, to dynamically enable advanced debugging mode on requests with a particular header name `X-APISIX-Dynamic-Debug` you can configure:
-
-```yaml
-http_filter:
-  enable: true # Enable/Disable Advanced Debug Mode Dynamically
-  enable_header_name: X-APISIX-Dynamic-Debug # Trace for the request with this header
-......
-#END
-```
-
-This will enable the advanced debug mode for requests like:
-
-```shell
-curl 127.0.0.1:9090/hello --header 'X-APISIX-Dynamic-Debug: foo'
-```
-
-**Note**: The `apisix.http_access_phase` module cannot be hooked for dynamic rules as the advanced debug mode is enabled based on the request.
index 5e750e7f17dd9b052b9a74f763f7f8f23c2f75c6..465675e83afc974e652b102d7567344214dfffc9 100644 (file)
@@ -123,6 +123,29 @@ deployment:
         trusted_ca_cert: /path/to/ca-cert
 ```
 
+As OpenResty <= 1.21.4 doesn't support sending mTLS requests, if you need to accept connections from APISIX instances running on these OpenResty versions,
+you need to disable client certificate verification on the CP instance.
+
+Here is an example configuration:
+
+```yaml title="conf/config.yaml"
+deployment:
+    role: control_plane
+    role_control_plane:
+        config_provider: etcd
+        conf_server:
+            listen: 0.0.0.0:9280
+            cert: /path/to/ca-cert
+            cert_key: /path/to/ca-cert
+    etcd:
+       host:
+           - https://xxxx
+       prefix: /apisix
+       timeout: 30
+    certs:
+        trusted_ca_cert: /path/to/ca-cert
+```
+
 ### Standalone
 
 In this mode, APISIX is deployed as DP and reads configurations from yaml file in the local file system.
index 1fd7246e6d1ba86daa0164f2c562418d42cc987b..64574cb987a094df98584a9e368ebdfc97b83fcc 100644 (file)
@@ -52,7 +52,7 @@ curl https://raw.githubusercontent.com/apache/apisix/master/utils/install-depend
 Then, create a directory and set the environment variable `APISIX_VERSION`:
 
 ```shell
-APISIX_VERSION='2.14.1'
+APISIX_VERSION='2.15.0'
 mkdir apisix-${APISIX_VERSION}
 ```
 
index 5507e5ee3ba8d2b7c9a5ea8f718fa79a22ace3f2..156e2f594c3f17e9b1d4cdd57837e6ce90f6760b 100644 (file)
@@ -50,7 +50,7 @@ with open(sys.argv[2]) as f:
     key = f.read()
 sni = sys.argv[3]
 api_key = "edd1c9f034335f136f87ad84b625c8f1"
-resp = requests.put("http://127.0.0.1:9080/apisix/admin/ssl/1", json={
+resp = requests.put("http://127.0.0.1:9080/apisix/admin/ssls/1", json={
     "cert": cert,
     "key": key,
     "snis": [sni],
@@ -171,3 +171,146 @@ private keys by `certs` and `keys`.
 
 `APISIX` will pair the certificate and private key with the same index as an SSL key
 pair, so the lengths of `certs` and `keys` must be the same.
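A sketch of such a resource (the PEM contents are placeholders): `certs[i]` is paired with `keys[i]` as one SSL key pair.

```json
{
  "snis": ["test.com"],
  "certs": ["<PEM cert A>", "<PEM cert B>"],
  "keys": ["<private key for cert A>", "<private key for cert B>"]
}
```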
+
+### Set up multiple CA certificates
+
+APISIX currently uses CA certificates in several places, such as [Protect Admin API](./mtls.md#protect-admin-api), [etcd with mTLS](./mtls.md#etcd-with-mtls), and [Deployment Modes](./architecture-design/deployment-role.md).
+
+In these places, `ssl_trusted_certificate` or `trusted_ca_cert` will be used to set up the CA certificate, but these configurations will eventually be translated into [lua_ssl_trusted_certificate](https://github.com/openresty/lua-nginx-module#lua_ssl_trusted_certificate) directive in OpenResty.
+
+If you need to set up different CA certificates in different places, then you can package these CA certificates into a CA bundle file and point to this file when you need to set up CAs. This will avoid the problem that the generated `lua_ssl_trusted_certificate` has multiple locations and overwrites each other.
+
+The following is a complete example to show how to set up multiple CA certificates in APISIX.
+
+Suppose the client and the APISIX Admin API, as well as APISIX and ETCD, communicate with each other over mTLS. There are currently two CA certificates, `foo_ca.crt` and `bar_ca.crt`, each of which issues a client/server certificate pair: `foo_ca.crt` and its issued pair are used to protect the Admin API, while `bar_ca.crt` and its issued pair are used to protect ETCD.
+
+The following table details the configurations involved in this example and what they do:
+
+| Configuration    | Type     | Description                                                                                                                                                                  |
+| -------------    | -------  | -------------------------------------------------------------------------------------------------------------------------------------------------------------                |
+| foo_ca.crt       | CA cert  | Issues the secondary certificate required for the client to communicate with the APISIX Admin API over mTLS.                                                                 |
+| foo_client.crt   | cert     | A certificate issued by `foo_ca.crt` and used by the client to prove its identity when accessing the APISIX Admin API.                                                       |
+| foo_client.key   | key      | Issued by `foo_ca.crt`, used by the client, the key file required to access the APISIX Admin API.                                                                            |
+| foo_server.crt   | cert     | Issued by `foo_ca.crt`, used by APISIX, corresponding to the `apisix.admin_api_mtls.admin_ssl_cert` configuration entry.                                                     |
+| foo_server.key   | key      | Issued by `foo_ca.crt`, used by APISIX, corresponding to the `apisix.admin_api_mtls.admin_ssl_cert_key` configuration entry.                                                 |
+| admin.apisix.dev | domain   | Common Name used in issuing the `foo_server.crt` certificate, through which the client accesses the APISIX Admin API.                                                        |
+| bar_ca.crt       | CA cert  | Issues the secondary certificate required for APISIX to communicate with ETCD over mTLS.                                                                                     |
+| bar_etcd.crt     | cert     | Issued by `bar_ca.crt` and used by ETCD, corresponding to the `--cert-file` option in the ETCD startup command.                                                              |
+| bar_etcd.key     | key      | Issued by `bar_ca.crt` and used by ETCD, corresponding to the `--key-file` option in the ETCD startup command.                                                               |
+| bar_apisix.crt   | cert     | Issued by `bar_ca.crt`, used by APISIX, corresponding to the `etcd.tls.cert` configuration entry.                                                                            |
+| bar_apisix.key   | key      | Issued by `bar_ca.crt`, used by APISIX, corresponding to the `etcd.tls.key` configuration entry.                                                                             |
+| etcd.cluster.dev | domain   | Common Name used in issuing the `bar_etcd.crt` certificate, used as the SNI when APISIX communicates with ETCD over mTLS; corresponds to the `etcd.tls.sni` configuration item. |
+| apisix.ca-bundle | CA bundle | Merged from `foo_ca.crt` and `bar_ca.crt`, replacing `foo_ca.crt` and `bar_ca.crt`.                                                                                         |
+
+1. Create CA bundle files
+
+```shell
+cat /path/to/foo_ca.crt /path/to/bar_ca.crt > apisix.ca-bundle
+```
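A quick sanity check that the merged bundle really contains both CAs — this sketch uses dummy PEM markers in a temporary directory in place of the real `foo_ca.crt` and `bar_ca.crt`:

```shell
# Create dummy stand-ins for the two CA certificates.
tmp=$(mktemp -d)
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n' > "$tmp/foo_ca.crt"
printf -- '-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > "$tmp/bar_ca.crt"

# Merge them into one bundle, as above.
cat "$tmp/foo_ca.crt" "$tmp/bar_ca.crt" > "$tmp/apisix.ca-bundle"

# The bundle should now contain two certificates.
grep -c 'BEGIN CERTIFICATE' "$tmp/apisix.ca-bundle"
```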
+
+2. Start the ETCD cluster and enable client authentication
+
+Start by writing a `goreman` configuration file named `Procfile-single-enable-mtls` with the following content:
+
+```text
+# Run with goreman (install it with `go get github.com/mattn/goreman`)
+etcd1: etcd --name infra1 --listen-client-urls https://127.0.0.1:12379 --advertise-client-urls https://127.0.0.1:12379 --listen-peer-urls http://127.0.0.1:12380 --initial-advertise-peer-urls http://127.0.0.1:12380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
+etcd2: etcd --name infra2 --listen-client-urls https://127.0.0.1:22379 --advertise-client-urls https://127.0.0.1:22379 --listen-peer-urls http://127.0.0.1:22380 --initial-advertise-peer-urls http://127.0.0.1:22380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
+etcd3: etcd --name infra3 --listen-client-urls https://127.0.0.1:32379 --advertise-client-urls https://127.0.0.1:32379 --listen-peer-urls http://127.0.0.1:32380 --initial-advertise-peer-urls http://127.0.0.1:32380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
+```
+
+Use `goreman` to start the ETCD cluster:
+
+```shell
+goreman -f Procfile-single-enable-mtls start > goreman.log 2>&1 &
+```
+
+3. Update `config.yaml`
+
+```yaml
+apisix:
+  admin_key:
+    - name: admin
+      key: edd1c9f034335f136f87ad84b625c8f1
+      role: admin
+  port_admin: 9180
+  https_admin: true
+
+  admin_api_mtls:
+    admin_ssl_ca_cert: /path/to/apisix.ca-bundle
+    admin_ssl_cert: /path/to/foo_server.crt
+    admin_ssl_cert_key: /path/to/foo_server.key
+
+  ssl:
+    ssl_trusted_certificate: /path/to/apisix.ca-bundle
+
+etcd:
+  host:
+    - "https://127.0.0.1:12379"
+    - "https://127.0.0.1:22379"
+    - "https://127.0.0.1:32379"
+  tls:
+    cert: /path/to/bar_apisix.crt
+    key: /path/to/bar_apisix.key
+    sni: etcd.cluster.dev
+```
+
+4. Test APISIX Admin API
+
+Start APISIX. If it starts successfully and there is no abnormal output in `logs/error.log`, mTLS communication between APISIX and ETCD is working.
+
+Use curl to simulate a client, communicating with the APISIX Admin API over mTLS to create a Route:
+
+```shell
+curl -vvv \
+    --resolve 'admin.apisix.dev:9180:127.0.0.1' https://admin.apisix.dev:9180/apisix/admin/routes/1 \
+    --cert /path/to/foo_client.crt \
+    --key /path/to/foo_client.key \
+    --cacert /path/to/apisix.ca-bundle \
+    -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
+{
+    "uri": "/get",
+    "upstream": {
+        "type": "roundrobin",
+        "nodes": {
+            "httpbin.org:80": 1
+        }
+    }
+}'
+```
+
+If the following SSL handshake process is output, mTLS communication between curl and the APISIX Admin API succeeded:
+
+```shell
+* TLSv1.3 (OUT), TLS handshake, Client hello (1):
+* TLSv1.3 (IN), TLS handshake, Server hello (2):
+* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
+* TLSv1.3 (IN), TLS handshake, Request CERT (13):
+* TLSv1.3 (IN), TLS handshake, Certificate (11):
+* TLSv1.3 (IN), TLS handshake, CERT verify (15):
+* TLSv1.3 (IN), TLS handshake, Finished (20):
+* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
+* TLSv1.3 (OUT), TLS handshake, Certificate (11):
+* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
+* TLSv1.3 (OUT), TLS handshake, Finished (20):
+* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
+```
+
+5. Verify APISIX proxy
+
+```shell
+curl http://127.0.0.1:9080/get -i
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Content-Length: 298
+Connection: keep-alive
+Date: Tue, 26 Jul 2022 16:31:00 GMT
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Credentials: true
+Server: APISIX/2.14.1
+
+...
+```
+
+APISIX proxied the request to the `/get` path of the upstream `httpbin.org` and returned `HTTP/1.1 200 OK`. The whole flow works correctly using the CA bundle in place of individual CA certificates.
index 46c6ab4e9ce613a190423f09c923e6acc40de0a6..939a5cac91c35dde7156b5258fb7f4a34a0dd829 100644 (file)
@@ -1,13 +1,11 @@
 {
-  "version": "2.14.1",
+  "version": "2.15.0",
   "sidebar": [
     {
       "type": "category",
       "label": "Architecture Design",
       "items": [
         "architecture-design/apisix",
-        "architecture-design/plugin-config",
-        "architecture-design/debug-mode",
         "architecture-design/deployment-role"
       ]
     },
@@ -19,6 +17,7 @@
         "terminology/consumer",
         "terminology/global-rule",
         "terminology/plugin",
+        "terminology/plugin-config",
         "terminology/route",
         "terminology/router",
         "terminology/script",
         {
           "type": "doc",
           "id": "building-apisix"
+        },
+        {
+          "type": "doc",
+          "id": "debug-mode"
         }
       ]
     },
index ff7afc28f6ffbbfb038adc02c78d382374465c95..7dd55e24fd78839065fb34f4ee369eae378b708f 100644 (file)
@@ -214,7 +214,7 @@ Triggers a full garbage collection in the HTTP subsystem.
 
 ### GET /v1/routes
 
-Introduced in [v3.0](https://github.com/apache/apisix/releases/tag/3.0).
+Introduced in [v2.10.0](https://github.com/apache/apisix/releases/tag/2.10.0).
 
 Returns all configured [Routes](./terminology/route.md):
 
@@ -254,7 +254,7 @@ Returns all configured [Routes](./terminology/route.md):
 
 ### GET /v1/route/{route_id}
 
-Introduced in [v3.0](https://github.com/apache/apisix/releases/tag/3.0).
+Introduced in [v2.10.0](https://github.com/apache/apisix/releases/tag/2.10.0).
 
 Returns the Route with the specified `route_id`:
 
@@ -292,7 +292,7 @@ Returns the Route with the specified `route_id`:
 
 ### GET /v1/services
 
-Introduced in [v2.11](https://github.com/apache/apisix/releases/tag/2.11).
+Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
 
 Returns all the Services:
 
@@ -340,7 +340,7 @@ Returns all the Services:
 
 ### GET /v1/service/{service_id}
 
-Introduced in [v2.11](https://github.com/apache/apisix/releases/tag/2.11).
+Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
 
 Returns the Service with the specified `service_id`:
 
@@ -374,7 +374,7 @@ Returns the Service with the specified `service_id`:
 
 ### GET /v1/upstreams
 
-Introduced in [v2.11](https://github.com/apache/apisix/releases/tag/2.11).
+Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
 
 Dumps all Upstreams:
 
@@ -415,7 +415,7 @@ Dumps all Upstreams:
 
 ### GET /v1/upstream/{upstream_id}
 
-Introduced in [v2.11](https://github.com/apache/apisix/releases/tag/2.11).
+Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
 
 Dumps the Upstream with the specified `upstream_id`:
 
@@ -451,3 +451,40 @@ Dumps the Upstream with the specified `upstream_id`:
    "modifiedIndex":1225
 }
 ```
+
+### GET /v1/plugin_metadatas
+
+Introduced in [v3.0.0](https://github.com/apache/apisix/releases/tag/3.0.0).
+
+Dumps all plugin_metadatas:
+
+```json
+[
+    {
+        "log_format": {
+            "upstream_response_time": "$upstream_response_time"
+        },
+        "id": "file-logger"
+    },
+    {
+        "ikey": 1,
+        "skey": "val",
+        "id": "example-plugin"
+    }
+]
+```
+
+### GET /v1/plugin_metadata/{plugin_name}
+
+Introduced in [v3.0.0](https://github.com/apache/apisix/releases/tag/3.0.0).
+
+Dumps the metadata with the specified `plugin_name`:
+
+```json
+{
+    "log_format": {
+        "upstream_response_time": "$upstream_response_time"
+    },
+    "id": "file-logger"
+}
+```
diff --git a/docs/en/latest/debug-mode.md b/docs/en/latest/debug-mode.md
new file mode 100644 (file)
index 0000000..e1438d0
--- /dev/null
@@ -0,0 +1,137 @@
+---
+id: debug-mode
+title: Debug mode
+keywords:
+  - API gateway
+  - Apache APISIX
+  - Debug mode
+description: Guide for enabling debug mode in Apache APISIX.
+---
+
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+You can use APISIX's debug mode to troubleshoot your configuration.
+
+## Basic debug mode
+
+You can enable the basic debug mode by adding this line to your debug configuration file (`conf/debug.yaml`):
+
+```yaml title="conf/debug.yaml"
+basic:
+  enable: true
+```
+
+:::note
+
+For APISIX releases prior to v2.10, basic debug mode is enabled by setting `apisix.enable_debug = true` in your configuration file (`conf/config.yaml`).
+
+:::
+
+If you have configured two Plugins `limit-conn` and `limit-count` on the Route `/hello`, you will receive a response with the header `Apisix-Plugins: limit-conn, limit-count` when you enable the basic debug mode.
+
+```shell
+curl http://127.0.0.1:1984/hello -i
+```
+
+```shell
+HTTP/1.1 200 OK
+Content-Type: text/plain
+Transfer-Encoding: chunked
+Connection: keep-alive
+Apisix-Plugins: limit-conn, limit-count
+X-RateLimit-Limit: 2
+X-RateLimit-Remaining: 1
+Server: openresty
+
+hello world
+```
+
+:::info IMPORTANT
+
+If the debug information cannot be included in a response header (for example, when the Plugin is in a stream subsystem), the debug information will be logged as an error log at a `warn` level.
+
+:::
+
+## Advanced debug mode
+
+You can configure advanced options in debug mode by modifying your debug configuration file (`conf/debug.yaml`).
+
+The following configurations are available:
+
+| Key                             | Required | Default | Description                                                                                                           |
+|---------------------------------|----------|---------|-----------------------------------------------------------------------------------------------------------------------|
+| hook_conf.enable                | True     | false   | Enables/disables hook debug trace. When enabled, the inputs and returned values of the hooked module functions are printed. |
+| hook_conf.name                  | True     |         | Name of the module function list to hook for the debug trace.                                                         |
+| hook_conf.log_level             | True     | warn    | Log level for input arguments & returned values.                                                                      |
+| hook_conf.is_print_input_args   | True     | true    | When set to `true` enables printing input arguments.                                                                  |
+| hook_conf.is_print_return_value | True     | true    | When set to `true` enables printing returned values.                                                                  |
+
+:::note
+
+A checker runs every second and looks for changes to the configuration file. The file is only re-read if its last modification time has changed.
+
+You can add an `#END` flag to tell the checker to only read the file up to that point.
+
+:::
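The checker's behavior described above can be sketched as follows. This is an illustrative Python sketch of the idea (mtime polling plus the `#END` cutoff), not APISIX's actual Lua implementation; the file path is an assumption:

```python
import os
import time

DEBUG_FILE = "conf/debug.yaml"  # assumed path, relative to the APISIX working directory

def read_until_end_flag(path):
    """Read the file, keeping only the content before the #END flag."""
    lines = []
    with open(path) as f:
        for line in f:
            if line.strip() == "#END":
                break
            lines.append(line)
    return "".join(lines)

def watch(path, on_change, interval=1):
    """Poll every `interval` seconds; re-read only when the mtime changes."""
    last_mtime = 0
    while True:
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            on_change(read_until_end_flag(path))
        time.sleep(interval)
```

Touching the file without changing its modification time would therefore not trigger a reload in this sketch.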
+
+The example below shows how you can configure advanced options in debug mode:
+
+```yaml title="conf/debug.yaml"
+hook_conf:
+  enable: false # Enables/disables hook debug trace
+  name: hook_phase # Module list name of the hook that enabled the debug trace
+  log_level: warn # Log level for input arguments & returned values
+  is_print_input_args: true # When set to `true` enables printing input arguments
+  is_print_return_value: true # When set to `true` enables printing returned values
+
+hook_phase: # Module function list, Name: hook_phase
+  apisix: # Referenced module name
+    - http_access_phase # Function names (array)
+    - http_header_filter_phase
+    - http_body_filter_phase
+    - http_log_phase
+#END
+```
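Conceptually, the hook wraps each function listed under `hook_phase` and logs its arguments and return value at the configured log level. A minimal Python sketch of this wrapping idea (APISIX implements it in Lua; the phase function below is a stand-in, not the real one):

```python
import functools
import logging

logging.basicConfig(level=logging.WARNING)

def hook(fn, log_level=logging.WARNING, print_args=True, print_ret=True):
    """Wrap fn so its inputs and returned values are logged, mirroring
    is_print_input_args / is_print_return_value."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if print_args:
            logging.log(log_level, "call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        ret = fn(*args, **kwargs)
        if print_ret:
            logging.log(log_level, "return %s -> %r", fn.__name__, ret)
        return ret
    return wrapper

@hook
def http_access_phase(route):  # hypothetical stand-in for apisix.http_access_phase
    return {"matched_route": route}
```

Every call to a hooked function then produces a "call" and a "return" line in the log without changing the function's behavior.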
+
+### Dynamically enable advanced debug mode
+
+You can also enable advanced debug mode only on particular requests.
+
+The example below shows how you can enable it on requests with the header `X-APISIX-Dynamic-Debug`:
+
+```yaml title="conf/debug.yaml"
+http_filter:
+  enable: true # Enable/disable advanced debug mode dynamically
+  enable_header_name: X-APISIX-Dynamic-Debug # Trace for the request with this header
+...
+#END
+```
+
+This will enable the advanced debug mode only for requests like:
+
+```shell
+curl 127.0.0.1:9090/hello --header 'X-APISIX-Dynamic-Debug: foo'
+```
+
+:::note
+
+The `apisix.http_access_phase` module cannot be hooked for this dynamic rule as the advanced debug mode is enabled based on the request.
+
+:::
index 5442c874eb2910333129c6013364185e4b0ad1ea..cd11811e70b7cf64640893d1c54eb284115369b9 100644 (file)
@@ -51,7 +51,7 @@ It is very easy for APISIX to extend the discovery client, the basic steps are a
 
 First, create a directory `eureka` under `apisix/discovery`;
 
-After that, add [`init.lua`](../../../apisix/discovery/eureka/init.lua) in the `apisix/discovery/eureka` directory;
+After that, add [`init.lua`](https://github.com/apache/apisix/blob/master/apisix/discovery/init.lua) in the `apisix/discovery/eureka` directory;
 
 Then implement the `_M.init_worker()` function for initialization and the `_M.nodes(service_name)` function for obtaining the list of service instance nodes in `init.lua`:
 
@@ -202,7 +202,7 @@ Transfer-Encoding: chunked
 Connection: keep-alive
 Server: APISIX web server
 
-{"node":{"value":{"uri":"\/user\/*","upstream": {"service_name": "USER-SERVICE", "type": "roundrobin", "discovery_type": "eureka"}},"createdIndex":61925,"key":"\/apisix\/routes\/1","modifiedIndex":61925},"action":"create"}
+{"node":{"value":{"uri":"\/user\/*","upstream": {"service_name": "USER-SERVICE", "type": "roundrobin", "discovery_type": "eureka"}},"createdIndex":61925,"key":"\/apisix\/routes\/1","modifiedIndex":61925}}
 ```
 
 Because upstream interface URLs may conflict, a prefix is usually used in the gateway to distinguish them:
index 7826c66b753b51f1e74d30329641d8ae49ce3c51..e4dbde39be6c3349845183f47d787e644b75b6c7 100644 (file)
@@ -166,8 +166,7 @@ The format response as below:
       "status": 1
     },
     "key": "/apisix/routes/1"
-  },
-  "action": "set"
+  }
 }
 ```
 
index 0bf74312895127d2716e9b45e3563e80ddba6094..eb931ba35077c46ec2f9b82d9057f5b7d4e69755 100644 (file)
@@ -52,6 +52,8 @@ discovery:
        # eyJhbGciOiJSUzI1NiIsImtpZCI6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEif
        # 6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEifeyJhbGciOiJSUzI1NiIsImtpZCI
 
+    default_weight: 50 # weight assigned to each discovered endpoint. default 50, minimum 0
+
     # kubernetes discovery plugin support use namespace_selector
     # you can use one of [equal, not_equal, match, not_match] filter namespace
     namespace_selector:
index a35a2ac2890b3f27e7cc50320f8eb3d088e39910..6974ea43f6282ed5123dd0ce35c5d3a365053830 100644 (file)
@@ -92,8 +92,7 @@ The formatted response as below:
       "priority": 0,
       "uri": "\/nacos\/*"
     }
-  },
-  "action": "set"
+  }
 }
 ```
 
@@ -148,8 +147,7 @@ The formatted response as below:
       "priority": 0,
       "uri": "\/nacosWithNamespaceId\/*"
     }
-  },
-  "action": "set"
+  }
 }
 ```
 
@@ -197,8 +195,7 @@ The formatted response as below:
       "priority": 0,
       "uri": "\/nacosWithGroupName\/*"
     }
-  },
-  "action": "set"
+  }
 }
 ```
 
@@ -248,7 +245,6 @@ The formatted response as below:
       "priority": 0,
       "uri": "\/nacosWithNamespaceIdAndGroupName\/*"
     }
-  },
-  "action": "set"
+  }
 }
 ```
index 8b393fcc828e17f3fbeb02aea4e0c3fa1b79e5f2..050d6ae0c7dad34cb4a6dadf6e0318b542059108 100644 (file)
@@ -189,7 +189,6 @@ This response indicates that APISIX is running successfully:
 ```json
 {
   "count":0,
-  "action":"get",
   "node":{
     "key":"/apisix/services",
     "nodes":[],
index 08ada40c185c730b33edea7274c9c19d59d8f6cc..0deacec48c3790f18a442ee99677b125e78a35f3 100644 (file)
@@ -31,8 +31,6 @@ title: Install Dependencies
 
 - On some platforms, installing LuaRocks via the package manager will cause Lua to be upgraded to Lua 5.3, so we recommend installing LuaRocks from source. If you install OpenResty and its OpenSSL development library (openresty-openssl111-devel for rpm and openresty-openssl111-dev for deb) via the official repository, then [we provide a script for automatic installation](https://github.com/apache/apisix/blob/master/utils/linux-install-luarocks.sh). If you compile OpenResty yourself, you can refer to the above script and change the path in it. If you don't specify the OpenSSL library path at compile time, you don't need to configure the OpenSSL variables in LuaRocks, because the system's OpenSSL is used by default. If an OpenSSL library is specified at compile time, then you need to ensure that LuaRocks' OpenSSL configuration is consistent with OpenResty's.
 
-- WARNING: If you are using OpenResty which is older than `1.17.8`, please installing openresty-openss-devel instead of openresty-openssl111-devel.
-
 - OpenResty is a dependency of APISIX. If this is your first time deploying APISIX and you don't need OpenResty to run other services, you can stop and disable OpenResty after installation, since this will not affect APISIX. Proceed carefully based on the services you run. For example, in Ubuntu: `systemctl stop openresty && systemctl disable openresty`.
 
 ## Install
index 40d9e44e472dcff01a75426a4aa0805ef5135d52..775ceb5b36b0220272851d73088ffd593a72c3f2 100644 (file)
@@ -214,6 +214,48 @@ brew services start etcd
 
 ## Next steps
 
+### Configuring APISIX
+
+You can configure your APISIX deployment in two ways:
+
+1. By directly changing your configuration file (`conf/config.yaml`).
+2. By using the `--config` or the `-c` flag to pass the path to your configuration file while starting APISIX.
+
+   ```shell
+   apisix start -c <path to config file>
+   ```
+
+APISIX will use the configurations added in this configuration file and will fall back to the default configuration for anything not configured there.
+
+For example, to configure the default listening port to be `8000` without changing other configurations, your configuration file could look like this:
+
+```yaml title="conf/config.yaml"
+apisix:
+  node_listen: 8000
+```
+
+Now, if you decide you want to change the etcd address to `http://foo:2379`, you can add it to your configuration file. This will not change other configurations.
+
+```yaml title="conf/config.yaml"
+apisix:
+  node_listen: 8000
+
+etcd:
+  host: "http://foo:2379"
+```
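The fallback behavior — your values override the defaults, while everything you leave out keeps its default — amounts to a recursive overlay. A minimal sketch of that merge idea, assuming hypothetical excerpts of both files loaded as dicts (this is not APISIX's actual config loader, which has additional rules):

```python
def merge_conf(default, user):
    """Recursively overlay user config.yaml values on the config-default.yaml values."""
    merged = dict(default)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_conf(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical excerpts of conf/config-default.yaml and conf/config.yaml:
default_conf = {"apisix": {"node_listen": 9080}, "etcd": {"host": "http://127.0.0.1:2379"}}
user_conf = {"apisix": {"node_listen": 8000}}

conf = merge_conf(default_conf, user_conf)
# node_listen is taken from config.yaml; etcd.host keeps its default
```

This is why a `conf/config.yaml` containing only `node_listen` is enough: every other setting is filled in from `conf/config-default.yaml`.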
+
+:::warning
+
+APISIX's default configuration can be found in the `conf/config-default.yaml` file and should not be modified. It is bound to the source code, and the configuration should only be changed by the methods mentioned above.
+
+:::
+
+:::warning
+
+The `conf/nginx.conf` file is automatically generated and should not be modified.
+
+:::
+
 ### Updating Admin API key
 
 It is recommended to modify the Admin API key to ensure security.
index 36dd31d6e6c180a68f068f17e7e1d1d8699cc3bb..5551534936b541a1a674033f4be4b3cd58dc529f 100644 (file)
@@ -351,7 +351,7 @@ ONLY:
 --- config
 ...
 --- response_body
-{"action":"get","count":0,"node":{"dir":true,"key":"/apisix/upstreams","nodes":[]}}
+{"count":0,"node":{"dir":true,"key":"/apisix/upstreams","nodes":[]}}
 ```
 
 ### Executing Shell Commands
index 294d4b162fbf324368f2023736a8aeed31adbe4b..33e0fb7f3923384d4fe99d6226bf296f4b70cf3b 100644 (file)
@@ -126,7 +126,7 @@ if len(sys.argv) >= 5:
         reqParam["client"]["ca"] = clientCert
     if len(sys.argv) >= 6:
         reqParam["client"]["depth"] = int(sys.argv[5])
-resp = requests.put("http://127.0.0.1:9080/apisix/admin/ssl/1", json=reqParam, headers={
+resp = requests.put("http://127.0.0.1:9080/apisix/admin/ssls/1", json=reqParam, headers={
     "X-API-KEY": api_key,
 })
 print(resp.status_code)
index 631130ad7f3cc2f50181febd7b27c24a96fd5c2a..c394372dc6f137a050b74a6982f83e4f4a6e754b 100644 (file)
@@ -516,7 +516,7 @@ The above test case represents a simple scenario. Most scenarios will require mu
 
 Additionally, there are some convenience testing endpoints which can be found [here](https://github.com/apache/apisix/blob/master/t/lib/server.lua#L36). For example, see [proxy-rewrite](https://github.com/apache/apisix/blob/master/t/plugin/proxy-rewrite.lua). In test 42, the upstream `uri` is made to redirect `/test?new_uri=hello` to `/hello` (which always returns `hello world`). In test 43, the response body is confirmed to equal `hello world`, meaning the proxy-rewrite configuration added with test 42 worked correctly.
 
-Refer the following [document](how-to-build.md) to setup the testing framework.
+Refer to the following [document](building-apisix.md) to set up the testing framework.
 
 ### Attach the test-nginx execution process
 
index 4469b5a31d40276d8dc67754afadeb843b2bf444..2a519f7d35dd187fbe04f453919259b1169e0b87 100644 (file)
@@ -46,7 +46,7 @@ In an unhealthy state, if the Upstream service responds with a status code from
 | break_response_headers  | array[object]  | False    |         | [{"key":"header_name","value":"can contain Nginx $var"}] | Headers of the response message to return when Upstream is unhealthy. Can only be configured when the `break_response_body` attribute is configured. The values can contain APISIX variables. For example, we can use `{"key":"X-Client-Addr","value":"$remote_addr:$remote_port"}`. |
 | max_breaker_sec         | integer        | False    | 300     | >=3             | Maximum time in seconds for circuit breaking.                                                                                                                                                                                                |
 | unhealthy.http_statuses | array[integer] | False    | [500]   | [500, ..., 599] | Status codes of Upstream to be considered unhealthy.                                                                                                                                                                                         |
-| unhealthy.failures      | integer        | False    | 3       | >=1             | Number of consecutive failures for the Upstream service to be considered unhealthy.                                                                                                                                                          |
+| unhealthy.failures      | integer        | False    | 3       | >=1             | Number of failures within a certain period of time for the Upstream service to be considered unhealthy.                                                                                                                                                          |
 | healthy.http_statuses   | array[integer] | False    | [200]   | [200, ..., 499] | Status codes of Upstream to be considered healthy.                                                                                                                                                                                           |
 | healthy.successes       | integer        | False    | 3       | >=1             | Number of consecutive healthy requests for the Upstream service to be considered healthy.                                                                                                                                                    |
 
@@ -80,7 +80,7 @@ curl "http://127.0.0.1:9080/apisix/admin/routes/1" -H 'X-API-KEY: edd1c9f034335f
 }'
 ```
 
-In this configuration, a response code of 500 or 503 three times in a row triggers the unhealthy status of the Upstream service. A response code of 200 restores its healthy status.
+In this configuration, a response code of 500 or 503 three times within a certain period of time triggers the unhealthy status of the Upstream service. A response code of 200 restores its healthy status.
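The `unhealthy.failures` attribute counts failures within a period of time rather than strictly consecutive ones. A hedged sketch of that time-window counting idea (illustrative only; the actual plugin tracks state in a shared dict and also applies `max_breaker_sec`, which is elided here):

```python
import time

class WindowCounter:
    """Count events observed within the last `window` seconds."""
    def __init__(self, window):
        self.window = window
        self.events = []  # timestamps of unhealthy responses

    def add(self, now=None):
        now = time.time() if now is None else now
        self.events.append(now)

    def count(self, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.events = [t for t in self.events if now - t <= self.window]
        return len(self.events)
```

With `unhealthy.failures` set to 3, the Upstream would be considered unhealthy as soon as `count()` reaches 3 within the window, even if healthy responses were interleaved.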
 
 ## Example usage
 
index e4a9c4f81c73a30acd5cb0d1679085b30f3e6f11..cda3e19246e3d2ffbdca154b806b94d61a2ca881 100644 (file)
@@ -87,12 +87,12 @@ This plugin will create an API endpoint in APISIX to handle batch requests.
 
 ### Request
 
-| Name     | Type                        | Required | Default | Description                   |
-| -------- | --------------------------- | -------- | ------- | ----------------------------- |
-| query    | object                      | False    |         | Query string for the request. |
-| headers  | object                      | False    |         | Headers for all the requests. |
-| timeout  | integer                     | False    | 30000   | Timeout in ms.                |
-| pipeline | [HttpRequest](#httprequest) | True     |         | Details of the request.       |
+| Name     | Type                               | Required | Default | Description                   |
+| -------- |------------------------------------| -------- | ------- | ----------------------------- |
+| query    | object                             | False    |         | Query string for the request. |
+| headers  | object                             | False    |         | Headers for all the requests. |
+| timeout  | integer                            | False    | 30000   | Timeout in ms.                |
+| pipeline | array[[HttpRequest](#httprequest)] | True     |         | Details of the request.       |
 
 #### HttpRequest
 
index 505a26cd31609f8b19b7de9c56c604e9058dc1fc..f0808b19b30f3ac9aba4ac8d243c58c90c70b95a 100644 (file)
@@ -35,7 +35,8 @@ The `clickhouse-logger` Plugin is used to push logs to [ClickHouse](https://clic
 
 | Name          | Type    | Required | Default             | Valid values | Description                                                    |
 |---------------|---------|----------|---------------------|--------------|----------------------------------------------------------------|
-| endpoint_addr | string  | True     |                     |              | ClickHouse endpoint.                                           |
+| endpoint_addr | string  | True     |                     |              | Deprecated. Use `endpoint_addrs` instead. ClickHouse endpoint.  |
+| endpoint_addrs | array  | True     |                     |              | ClickHouse endpoints.                                          |
 | database      | string  | True     |                     |              | Name of the database to store the logs.                        |
 | logtable      | string  | True     |                     |              | Table name to store the logs.                                  |
 | user          | string  | True     |                     |              | ClickHouse username.                                           |
@@ -80,6 +81,7 @@ CREATE TABLE default.test (
   `host` String,
   `client_ip` String,
   `route_id` String,
+  `service_id` String,
   `@timestamp` String,
    PRIMARY KEY(`@timestamp`)
 ) ENGINE = MergeTree()
@@ -95,6 +97,7 @@ Now, if you run `select * from default.test;`, you will get the following row:
 
 ## Enabling the Plugin
 
+If multiple endpoints are configured, logs will be pushed to one of them chosen at random.
+
 The example below shows how you can enable the Plugin on a specific Route:
 
 ```shell
@@ -106,7 +109,7 @@ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
                 "password": "a",
                 "database": "default",
                 "logtable": "test",
-                "endpoint_addr": "http://127.0.0.1:8123"
+                "endpoint_addrs": ["http://127.0.0.1:8123"]
             }
        },
       "upstream": {
index f142401ebc962fd69f62a54f76123bf72f7a530c..734ee20a8315990d7d74008c5fa0ae6021202a3d 100644 (file)
@@ -2,10 +2,9 @@
 title: client-control
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - Client Control
-  - client-control
-description: This document contains information about the Apache APISIX client-control Plugin.
+description: This document describes the Apache APISIX client-control Plugin, you can use it to control NGINX behavior to handle a client request dynamically.
 ---
 
 <!--
@@ -29,7 +28,7 @@ description: This document contains information about the Apache APISIX client-c
 
 ## Description
 
-The `client-control` Plugin can be used to dynamically control the behavior of Nginx to handle a client request.
+The `client-control` Plugin can be used to dynamically control the behavior of NGINX to handle a client request, by setting the max size of the request body.
 
 :::info IMPORTANT
 
@@ -41,14 +40,15 @@ This Plugin requires APISIX to run on APISIX-Base. See [apisix-build-tools](http
 
 | Name          | Type    | Required | Valid values | Description                                                                                                                          |
 | ------------- | ------- | -------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------ |
-| max_body_size | integer | False    | [0,...]      | Dynamically set the [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) directive. |
+| max_body_size | integer | False    | [0,...]      | Dynamically set the [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) directive. |
 
 ## Enabling the Plugin
 
 The example below enables the Plugin on a specific Route:
 
 ```shell
-curl -i http://127.0.0.1:9080/apisix/admin/routes/1  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 \
+  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "uri": "/index.html",
     "plugins": {
@@ -90,7 +90,8 @@ HTTP/1.1 413 Request Entity Too Large
 To disable the `client-control` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload, and you do not have to restart for this to take effect.
 
 ```shell
-curl http://127.0.0.1:9080/apisix/admin/routes/1  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/routes/1  \
+  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "uri": "/index.html",
     "upstream": {
index e2c5dd06672a00c6566415dae5339c11b321d23b..f881a882455d1950d99d37dc52e33739a48f1a37 100644 (file)
@@ -2,10 +2,9 @@
 title: consumer-restriction
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - Consumer restriction
-  - consumer-restriction
-description: This document contains information about the Apache APISIX consumer-restriction Plugin.
+description: The Consumer Restriction Plugin allows users to set access restrictions based on Consumer, Route, or Service.
 ---
 
 <!--
@@ -33,14 +32,14 @@ The `consumer-restriction` Plugin allows users to set access restrictions based
 
 ## Attributes
 
-| Name               | Type          | Required | Default       | Valid values                                                                              | Description                                                                    |
-|--------------------|---------------|----------|---------------|-------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
-| type               | string        | False    | consumer_name | ["consumer_name", "service_id", "route_id"]                                               | Type of object to base the restriction on.                                     |
-| whitelist          | array[string] | True     |               |                                                                                           | List of objects to whitelist. Has a higher priority than `allowed_by_methods`. |
-| blacklist          | array[string] | True     |               |                                                                                           | List of objects to blacklist. Has a higher priority than `whitelist`.          |
-| rejected_code      | integer       | False    | 403           | [200,...]                                                                                 | HTTP status code returned when the request is rejected.                        |
-| rejected_msg       | string        | False    |               |                                                                                           | Message returned when the request is rejected.                                 |
-| allowed_by_methods | array[object] | False    |               | ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS", "CONNECT", "TRACE", "PURGE"] | List of allowed HTTP methods for a Consumer.                                   |
+| Name               | Type          | Required | Default       | Valid values  | Description |
+|--------------------|---------------|----------|---------------|---------------|-------------|
+| type               | string        | False    | consumer_name | ["consumer_name", "service_id", "route_id"]  | Type of object to base the restriction on.  |
+| whitelist          | array[string] | True     |               |                                              | List of objects to whitelist. Has a higher priority than `allowed_by_methods`. |
+| blacklist          | array[string] | True     |               |                                              | List of objects to blacklist. Has a higher priority than `whitelist`.          |
+| rejected_code      | integer       | False    | 403           | [200,...]                                    | HTTP status code returned when the request is rejected.                        |
+| rejected_msg       | string        | False    |               |                                              | Message returned when the request is rejected.                                 |
+| allowed_by_methods | array[object] | False    |               | ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS", "CONNECT", "TRACE", "PURGE"] | List of allowed HTTP methods for a Consumer. |
 
 :::note
 
@@ -115,7 +114,6 @@ curl -u jack2019:123456 http://127.0.0.1:9080/index.html
 
 ```shell
 HTTP/1.1 200 OK
-...
 ```
 
 And requests from `jack2` are blocked:
index 2205805daccb9664da0f8c6044fda92551de23f2..913f586689ba9392c6169066ac834f2cdcd30757 100644 (file)
@@ -2,7 +2,7 @@
 title: cors
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - CORS
 description: This document contains information about the Apache APISIX cors Plugin.
 ---
@@ -45,7 +45,8 @@ The `cors` Plugins lets you enable [CORS](https://developer.mozilla.org/en-US/do
 
 :::info IMPORTANT
 
-The `allow_credential` attribute is sensitive and must be used carefully. If set to `true` the default value `*` of the other attributes will be invalid and they should be specified explicitly. When using `**` you are vulnerable to security risks like CSRF. Make sure that this meets your security levels before using it.
+1. The `allow_credential` attribute is sensitive and must be used carefully. If set to `true`, the default value `*` of the other attributes becomes invalid and they should be specified explicitly.
+2. When using `**`, you are vulnerable to security risks like CSRF. Make sure that this meets your security levels before using it.
 
 :::
 
index 3285f5121ce1738ea0d57b097a1e58af32eb3b5b..bb963d1874bf7f3e72832ed6a88a40ad155d360c 100644 (file)
@@ -5,7 +5,7 @@ keywords:
   - Plugin
   - Cross-site request forgery
   - csrf
-description: This document contains information about the Apache APISIX csrf Plugin.
+description: The CSRF Plugin can be used to protect your API against CSRF attacks using the Double Submit Cookie method.
 ---
 
 <!--
index f198de8e2ff194761b491e937f2478e665b165fb..21c6e42a12c71e5a67be542837a81d179fdf3d1d 100644 (file)
@@ -58,10 +58,10 @@ APISIX takes in an HTTP request, transcodes it and forwards it to a gRPC service
 
 Before enabling the Plugin, you have to add the content of your `.proto` or `.pb` files to APISIX.
 
-You can use the `/admin/proto/id` endpoint and add the contents of the file to the `content` field:
+You can use the `/admin/protos/id` endpoint and add the contents of the file to the `content` field:
 
 ```shell
-curl http://127.0.0.1:9080/apisix/admin/proto/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/protos/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "content" : "syntax = \"proto3\";
     package helloworld;
@@ -122,7 +122,7 @@ api_key = "edd1c9f034335f136f87ad84b625c8f1" # use a different API key
 reqParam = {
     "content": content,
 }
-resp = requests.put("http://127.0.0.1:9080/apisix/admin/proto/" + id, json=reqParam, headers={
+resp = requests.put("http://127.0.0.1:9080/apisix/admin/protos/" + id, json=reqParam, headers={
     "X-API-KEY": api_key,
 })
 print(resp.status_code)
@@ -145,7 +145,7 @@ Response:
 
 ```
 # 200
-# {"node":{"value":{"create_time":1643879753,"update_time":1643883085,"content":"CmgKEnByb3RvL2ltcG9ydC5wcm90bxIDcGtnIhoKBFVzZXISEgoEbmFtZRgBIAEoCVIEbmFtZSIeCghSZXNwb25zZRISCgRib2R5GAEgASgJUgRib2R5QglaBy4vcHJvdG9iBnByb3RvMwq9AQoPcHJvdG8vc3JjLnByb3RvEgpoZWxsb3dvcmxkGhJwcm90by9pbXBvcnQucHJvdG8iPAoHUmVxdWVzdBIdCgR1c2VyGAEgASgLMgkucGtnLlVzZXJSBHVzZXISEgoEYm9keRgCIAEoCVIEYm9keTI5CgpUZXN0SW1wb3J0EisKA1J1bhITLmhlbGxvd29ybGQuUmVxdWVzdBoNLnBrZy5SZXNwb25zZSIAQglaBy4vcHJvdG9iBnByb3RvMw=="},"key":"\/apisix\/proto\/1"},"action":"set"}
+# {"node":{"value":{"create_time":1643879753,"update_time":1643883085,"content":"CmgKEnByb3RvL2ltcG9ydC5wcm90bxIDcGtnIhoKBFVzZXISEgoEbmFtZRgBIAEoCVIEbmFtZSIeCghSZXNwb25zZRISCgRib2R5GAEgASgJUgRib2R5QglaBy4vcHJvdG9iBnByb3RvMwq9AQoPcHJvdG8vc3JjLnByb3RvEgpoZWxsb3dvcmxkGhJwcm90by9pbXBvcnQucHJvdG8iPAoHUmVxdWVzdBIdCgR1c2VyGAEgASgLMgkucGtnLlVzZXJSBHVzZXISEgoEYm9keRgCIAEoCVIEYm9keTI5CgpUZXN0SW1wb3J0EisKA1J1bhITLmhlbGxvd29ybGQuUmVxdWVzdBoNLnBrZy5SZXNwb25zZSIAQglaBy4vcHJvdG9iBnByb3RvMw=="},"key":"\/apisix\/proto\/1"}}
 ```
 
 Now, we can enable the `grpc-transcode` Plugin to a specific Route:
index 87ec78fa08d7f4b0341c36fb40478dff3c07feea..f14748006520b965aca700d2462cf2b5132d141d 100644 (file)
@@ -2,10 +2,10 @@
 title: http-logger
 keywords:
   - APISIX
+  - API Gateway
   - Plugin
   - HTTP Logger
-  - http-logger
-description: This document contains information about the Apache APISIX http-logger Plugin.
+description: This document contains information about the Apache APISIX http-logger Plugin. Using this Plugin, you can push APISIX log data to HTTP or HTTPS servers.
 ---
 
 <!--
@@ -47,8 +47,12 @@ This will allow the ability to send log data requests as JSON objects to monitor
 | concat_method          | string  | False    | "json"        | ["json", "new_line"] | Sets how to concatenate logs. When set to `json`, uses `json.encode` for all pending logs and when set to `new_line`, also uses `json.encode` but uses the newline (`\n`) to concatenate lines.                          |
 | ssl_verify             | boolean | False    | false         | [false, true]        | When set to `true` verifies the SSL certificate.                                                                                                                                                                         |
 
+:::note
+
 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
 
+:::
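The batch processor's flush rule — ship the pending entries every 5 seconds or once 1000 have accumulated — can be sketched as follows. This is an illustrative Python sketch, not the actual `batch-processor` module (which also retries failed batches and flushes on a timer rather than only on push):

```python
import time

class BatchBuffer:
    """Buffer log entries; flush when the batch is full or too old."""
    def __init__(self, send, max_entries=1000, max_delay=5):
        self.send = send            # callback that ships one batch of entries
        self.max_entries = max_entries
        self.max_delay = max_delay  # seconds
        self.entries = []
        self.first_at = None        # time the oldest pending entry arrived

    def push(self, entry, now=None):
        now = time.time() if now is None else now
        if not self.entries:
            self.first_at = now
        self.entries.append(entry)
        if len(self.entries) >= self.max_entries or now - self.first_at >= self.max_delay:
            self.flush()

    def flush(self):
        if self.entries:
            self.send(self.entries)
            self.entries = []
            self.first_at = None
```

Buffering this way means the logger issues one HTTP request per batch instead of one per proxied request.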
+
 ## Metadata
 
 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
@@ -66,7 +70,8 @@ Configuring the Plugin metadata is global in scope. This means that it will take
 The example below shows how you can configure through the Admin API:
 
 ```shell
-curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/http-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/http-logger \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "log_format": {
         "host": "$host",
@@ -88,7 +93,8 @@ With this configuration, your logs would be formatted as shown below:
 The example below shows how you can enable the Plugin on a specific Route:
 
 ```shell
-curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/routes/1 \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
       "plugins": {
             "http-logger": {
@@ -117,7 +123,7 @@ curl -i http://127.0.0.1:9080/hello
 
 ## Disable Plugin
 
-To disable the `http-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
+To disable this Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
 
 ```shell
 curl http://127.0.0.1:9080/apisix/admin/routes/1  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
index 3303f6f48d807b56e1f7db30c1b0300c1794377b..facc9a42ae7f876603a229f0c310f9cf0309a6cd 100644 (file)
@@ -39,7 +39,7 @@ Single IPs, multiple IPs or even IP ranges in CIDR notation like `10.10.10.0/24`
 |-----------|---------------|----------|---------------------------------|--------------|-------------------------------------------------------------|
 | whitelist | array[string] | False    |                                 |              | List of IPs or CIDR ranges to whitelist.                    |
 | blacklist | array[string] | False    |                                 |              | List of IPs or CIDR ranges to blacklist.                    |
-| message   | string        | False    | Your IP address is not allowed. | [1, 1024]    | Message returned when the IP address is not allowed access. |
+| message   | string        | False    | "Your IP address is not allowed" | [1, 1024]    | Message returned when the IP address is not allowed access. |
 
 :::note
 
index f6591a1994b14df697eb48af4649fee81ca69f6b..353ddd3a264931da1b4c1812befbb56e5363c9dd 100644 (file)
@@ -43,12 +43,13 @@ For Consumer:
 |---------------|---------|-------------------------------------------------------|---------|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | key           | string  | True                                                  |         |                             | Unique key for a Consumer.                                                                                                                                                                  |
 | secret        | string  | False                                                 |         |                             | The encryption key. If unspecified, auto generated in the background.                                                                                                                       |
-| public_key    | string  | True if `RS256` is set for the `algorithm` attribute. |         |                             | RSA public key.                                                                                                                                                                             |
-| private_key   | string  | True if `RS256` is set for the `algorithm` attribute. |         |                             | RSA private key.                                                                                                                                                                            |
-| algorithm     | string  | False                                                 | "HS256" | ["HS256", "HS512", "RS256"] | Encryption algorithm.                                                                                                                                                                       |
+| public_key    | string  | True if `RS256` or `ES256` is set for the `algorithm` attribute. |         |                             | RSA or ECDSA public key.                                                                                                                                                                             |
+| private_key   | string  | True if `RS256` or `ES256` is set for the `algorithm` attribute. |         |                             | RSA or ECDSA private key.                                                                                                                                                                            |
+| algorithm     | string  | False                                                 | "HS256" | ["HS256", "HS512", "RS256", "ES256"] | Encryption algorithm.                                                                                                                                                                       |
 | exp           | integer | False                                                 | 86400   | [1,...]                     | Expiry time of the token in seconds.                                                                                                                                                        |
 | base64_secret | boolean | False                                                 | false   |                             | Set to true if the secret is base64 encoded.                                                                                                                                                |
-| vault         | object  | False                                                 |         |                             | Set to true to use Vault for storing and retrieving secret (secret for HS256/HS512  or public_key and private_key for RS256). By default, the Vault path is `kv/apisix/consumer/<consumer_name>/jwt-auth`. |
+| vault         | object  | False                                                 |         |                             | Set this attribute to use Vault for storing and retrieving the secret (secret for HS256/HS512 or public_key and private_key for RS256/ES256). By default, the Vault path is `kv/apisix/consumer/<consumer_name>/jwt-auth`. |
+| lifetime_grace_period | integer | False                                         | 0       | [0,...]                     | Defines the leeway in seconds to account for clock skew between the server that generated the JWT and the server validating it. Value should be zero (0) or a positive integer. |
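
Since `ES256` is newly supported here, a minimal sketch of generating an ECDSA P-256 key pair with `openssl` that could be supplied as `private_key` and `public_key` (file names are illustrative):

```shell
# Generate an ECDSA P-256 (prime256v1) private key for the ES256 algorithm
openssl ecparam -name prime256v1 -genkey -noout -out es256-private.pem
# Derive the matching public key from the private key
openssl ec -in es256-private.pem -pubout -out es256-public.pem
```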
 
 :::info IMPORTANT
 
index 4bc86401dae1d7e7b8bc3cd66d1161ad17239991..b9aa130eb1e8a1c039d75dde9fddca3c1996d56b 100644 (file)
@@ -33,7 +33,7 @@ The `ldap-auth` Plugin can be used to add LDAP authentication to a Route or a Se
 
 This Plugin works with the Consumer object and the consumers of the API can authenticate with an LDAP server using [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
 
-This Plugin uses [lualdap](https://lualdap.github.io/lualdap/) for connecting with an LDAP server.
+This Plugin uses [lua-resty-ldap](https://github.com/api7/lua-resty-ldap) for connecting with an LDAP server.
 
 ## Attributes
 
@@ -49,7 +49,8 @@ For Route:
 |----------|---------|----------|---------|------------------------------------------------------------------------|
 | base_dn  | string  | True     |         | Base dn of the LDAP server. For example, `ou=users,dc=example,dc=org`. |
 | ldap_uri | string  | True     |         | URI of the LDAP server.                                                |
-| use_tls  | boolean | False    | `true`  | If set to `true` uses TLS.                                             |
+| use_tls  | boolean | False    | `false` | If set to `true` uses TLS.                                             |
+| tls_verify| boolean  | False     | `false`        | Whether to verify the server certificate when `use_tls` is enabled. If set to `true`, you must set `ssl_trusted_certificate` in `config.yaml` and make sure the host in `ldap_uri` matches the host in the server certificate. |
 | uid      | string  | False    | `cn`    | uid attribute.                                                         |
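
When `tls_verify` is enabled, the trusted CA must be configured in APISIX; a hedged sketch of the corresponding `config.yaml` entry (the certificate path is illustrative):

```yaml title="conf/config.yaml"
apisix:
  ssl:
    ssl_trusted_certificate: /path/to/ca.crt
```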
 
 ## Enabling the plugin
index 508ed1c58cdbf391ce25508107edde98eeeafd7a..e004be95031663b8b86a2d4dfcaf645b7329cc74 100644 (file)
@@ -141,3 +141,76 @@ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
     }
 }'
 ```
+
+## Limit the number of concurrent WebSocket connections
+
+Apache APISIX supports WebSocket proxying, and we can use the `limit-conn` plugin to limit the number of concurrent WebSocket connections.
+
+1. Create a Route, enable the WebSocket proxy and the `limit-conn` plugin.
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "uri": "/ws",
+    "enable_websocket": true,
+    "plugins": {
+        "limit-conn": {
+            "conn": 1,
+            "burst": 0,
+            "default_conn_delay": 0.1,
+            "rejected_code": 503,
+            "key_type": "var",
+            "key": "remote_addr"
+        }
+    },
+    "upstream": {
+        "type": "roundrobin",
+        "nodes": {
+            "127.0.0.1:1980": 1
+        }
+    }
+}'
+```
+
+The above Route enables the WebSocket proxy on `/ws` and limits the number of concurrent WebSocket connections to 1. Additional concurrent WebSocket connections will be rejected with a `503` response.
+
+2. Initiate a WebSocket request, and the connection is established successfully
+
+```shell
+curl --include \
+     --no-buffer \
+     --header "Connection: Upgrade" \
+     --header "Upgrade: websocket" \
+     --header "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
+     --header "Sec-WebSocket-Version: 13" \
+     --http1.1 \
+     http://127.0.0.1:9080/ws
+```
+
+```shell
+HTTP/1.1 101 Switching Protocols
+Connection: upgrade
+Upgrade: websocket
+Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
+Server: APISIX/2.15.0
+...
+```
+
+3. Initiate the WebSocket request again in another terminal; the request will be rejected:
+
+```shell
+HTTP/1.1 503 Service Temporarily Unavailable
+Date: Mon, 01 Aug 2022 03:49:17 GMT
+Content-Type: text/html; charset=utf-8
+Content-Length: 194
+Connection: keep-alive
+Server: APISIX/2.15.0
+
+<html>
+<head><title>503 Service Temporarily Unavailable</title></head>
+<body>
+<center><h1>503 Service Temporarily Unavailable</h1></center>
+<hr><center>openresty</center>
+</body>
+</html>
+```
index 5760aa5577fb5fc37f990fbfe95afdc2465965ca..031f6f84556a18cc072622e9b0f153bafb163d09 100644 (file)
@@ -2,10 +2,10 @@
 title: limit-req
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - Limit Request
   - limit-req
-description: This document contains information about the Apache APISIX limit-req Plugin.
+description: The limit-req Plugin limits the number of requests to your service using the leaky bucket algorithm.
 ---
 
 <!--
@@ -38,7 +38,7 @@ The `limit-req` Plugin limits the number of requests to your service using the [
 | rate              | integer | True     |         | rate > 0                   | Threshold for number of requests per second. Requests exceeding this rate (and below `burst`) will be delayed to match this rate.                                                                                                                                                                                                                                                                     |
 | burst             | integer | True     |         | burst >= 0                 | Number of additional requests allowed to be delayed per second. If the number of requests exceeds this hard limit, they will get rejected immediately.                                                                                                                                                                                                                                                |
 | key_type          | string  | False    | "var"   | ["var", "var_combination"] | Type of user specified key to use.                                                                                                                                                                                                                                                                                                                                                                    |
-| key               | string  | True     |         |                            | User specified key to base the request limiting on. If the `key_type` attribute is set to `var`, the key will be treated as a name of variable, like `remote_addr` or `consumer_name`. If the `key_type` is set to `var_combination`, the key will be a combination of variables, like `$remote_addr $consumer_name`. If the value of the key is empty, `remote_addr` will be set as the default key. |
+| key               | string  | True     |         |  ["remote_addr", "server_addr", "http_x_real_ip", "http_x_forwarded_for", "consumer_name"] | User specified key to base the request limiting on. If the `key_type` attribute is set to `var`, the key will be treated as a name of variable, like `remote_addr` or `consumer_name`. If the `key_type` is set to `var_combination`, the key will be a combination of variables, like `$remote_addr $consumer_name`. If the value of the key is empty, `remote_addr` will be set as the default key. |
 | rejected_code     | integer | False    | 503     | [200,...,599]              | HTTP status code returned when the requests exceeding the threshold are rejected.                                                                                                                                                                                                                                                                                                                     |
 | rejected_msg      | string  | False    |         | non-empty                  | Body of the response returned when the requests exceeding the threshold are rejected.                                                                                                                                                                                                                                                                                                                 |
 | nodelay           | boolean | False    | false   |                            | If set to `true`, requests within the burst threshold would not be delayed.                                                                                                                                                                                                                                                                                                                           |
@@ -48,7 +48,7 @@ The `limit-req` Plugin limits the number of requests to your service using the [
 
 You can enable the Plugin on a Route as shown below:
 
-```bash
+```shell
 curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "methods": ["GET"],
@@ -99,7 +99,7 @@ You can also configure the Plugin on specific consumers to limit their requests.
 
 First, you can create a Consumer and enable the `limit-req` Plugin on it:
 
-```bash
+```shell
 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "username": "consumer_jack",
@@ -121,7 +121,7 @@ In this example, the [key-auth](./key-auth.md) Plugin is used to authenticate th
 
 Next, create a Route and enable the `key-auth` Plugin:
 
-```bash
+```shell
 curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "methods": ["GET"],
@@ -146,13 +146,13 @@ Once you have configured the Plugin as shown above, you can test it out. The abo
 
 Now if you send a request:
 
-```bash
+```shell
 curl -i http://127.0.0.1:9080/index.html
 ```
 
 For authenticated requests:
 
-```bash
+```shell
 curl -i http://127.0.0.1:9080/index.html -H 'apikey: auth-jack'
 ```
 
@@ -176,7 +176,7 @@ Server: APISIX web server
 
 You can set a custom rejected message by configuring the `rejected_msg` attribute. You will then receive a response like:
 
-```bash
+```shell
 HTTP/1.1 503 Service Temporarily Unavailable
 Content-Type: text/html
 Content-Length: 194
@@ -190,7 +190,7 @@ Server: APISIX web server
 
 To disable the `limit-req` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
 
-```bash
+```shell
 curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "methods": ["GET"],
@@ -209,7 +209,7 @@ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
 
 Similarly for removing the Plugin from a Consumer:
 
-```bash
+```shell
 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "username": "consumer_jack",
index ce49a79e0c018732049576a6c1eaebbed2735a75..6b5dd18b43fc7ceae6a1d59493b7607258167e7b 100644 (file)
@@ -5,7 +5,7 @@ keywords:
   - API Gateway
   - Plugin
   - MQTT Proxy
-description: This document contains information about the Apache APISIX mqtt-proxy Plugin.
+description: This document contains information about the Apache APISIX mqtt-proxy Plugin. The `mqtt-proxy` Plugin is used for dynamic load balancing based on the `client_id` of the MQTT client.
 ---
 
 <!--
@@ -123,6 +123,39 @@ curl http://127.0.0.1:9080/apisix/admin/stream_routes/1 -H 'X-API-KEY: edd1c9f03
 
 MQTT connections with different client ID will be forwarded to different nodes based on the consistent hash algorithm. If client ID is missing, client IP is used instead for load balancing.
 
+## Enabling mTLS with mqtt-proxy plugin
+
+Stream proxies use TCP connections and can accept TLS. Follow the guide on [how to accept TLS over TCP connections](../stream-proxy.md#accept-tls-over-tcp-connection) to open a stream proxy with TLS enabled.
+
+The `mqtt-proxy` plugin is enabled through TCP communications on the specified port for the stream proxy, and will also require clients to authenticate via TLS if `tls` is set to `true`.
+
+Configure `ssl` by providing the CA certificate and the server certificate, together with a list of SNIs. Steps to protect `stream_routes` with `ssl` are equivalent to those for [protecting Routes](../mtls.md#protect-route).
+
+### Create a stream_route using mqtt-proxy plugin and mTLS
+
+Here is an example of how to create a stream_route that uses the `mqtt-proxy` plugin, providing the CA certificate, the client certificate, and the client key (for self-signed certificates that are not trusted by your host, use the `-k` flag):
+
+```shell
+curl 127.0.0.1:9180/apisix/admin/stream_routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "plugins": {
+        "mqtt-proxy": {
+            "protocol_name": "MQTT",
+            "protocol_level": 4
+        }
+    },
+    "sni": "${your_sni_name}",
+    "upstream": {
+        "nodes": {
+            "127.0.0.1:1980": 1
+        },
+        "type": "roundrobin"
+    }
+}'
+```
+
+The `sni` name must match one or more of the SNIs provided to the SSL object that you created with the CA and server certificates.
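
For reference, a hedged sketch of creating such an SSL object via the Admin API (the placeholder certificate contents, the SNI value, and the `client.ca` field for client certificate verification are assumptions to verify against your APISIX version):

```shell
curl http://127.0.0.1:9080/apisix/admin/ssl/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "cert": "<content of the server certificate>",
    "key": "<content of the server private key>",
    "client": {
        "ca": "<content of the CA certificate>"
    },
    "snis": ["${your_sni_name}"]
}'
```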
+
 ## Disable Plugin
 
 To disable the `mqtt-proxy` Plugin you can remove the corresponding configuration as shown below:
index ad0c681cc79b5992db20935e67fb4f11d544d0ed..353b455871f0aa65056f3101efc29d8c639f0b8e 100644 (file)
@@ -71,7 +71,7 @@ curl http://127.0.0.1:9080/apisix/admin/routes/ns -H 'X-API-KEY: edd1c9f034335f1
 Once you have configured the Plugin, you can make a request to the `apisix/status` endpoint to get the status:
 
 ```shell
-curl localhost:9080/apisix/status -i
+curl http://127.0.0.1:9080/apisix/status -i
 ```
 
 ```shell
@@ -103,7 +103,7 @@ The parameters in the response are described below:
 
 To remove the Plugin, you can remove it from your configuration file (`conf/config.yaml`):
 
-```
+```yaml title="conf/config.yaml"
 plugins:
   - example-plugin
   - limit-req
@@ -112,24 +112,8 @@ plugins:
   ......
 ```
 
-To disable the `node-status` Plugin on a Route, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
-
-```sh
-curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
-{
-    "uri": "/route1",
-    "upstream": {
-        "type": "roundrobin",
-        "nodes": {
-            "192.168.1.100:80": 1
-        }
-    },
-    "plugins": {}
-}'
-```
-
 You can also remove the Route on `/apisix/status`:
 
-```sh
+```shell
 curl http://127.0.0.1:9080/apisix/admin/routes/ns -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X DELETE
 ```
index 5b33e5d53ad0672a360b7542ba44ea16601b173f..e1cdae8d3ab3dfc8df3a0471bb5dc12eb5ff4f80 100644 (file)
@@ -2,10 +2,10 @@
 title: openid-connect
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - OpenID Connect
-  - openid-connect
-description: This document contains information about the Apache APISIX openid-connect Plugin.
+  - OIDC
+description: OpenID Connect (OIDC) allows clients to obtain user information from identity providers such as Keycloak, Ory Hydra, Okta, and Auth0. APISIX supports integrating with these identity providers to protect your APIs.
 ---
 
 <!--
@@ -29,49 +29,53 @@ description: This document contains information about the Apache APISIX openid-c
 
 ## Description
 
-The `openid-connect` Plugin provides authentication and introspection capability to APISIX with [OpenID Connect](https://openid.net/connect/).
+[OpenID Connect](https://openid.net/connect/) (OIDC) is an authentication protocol based on OAuth 2.0. It allows clients to obtain user information from an identity provider (IdP) such as Keycloak, Ory Hydra, Okta, or Auth0. Apache APISIX supports integrating with these identity providers to protect your APIs.
 
 ## Attributes
 
-| Name                                 | Type    | Required | Default               | Valid values | Description                                                                                                        |
-|--------------------------------------|---------|----------|-----------------------|--------------|--------------------------------------------------------------------------------------------------------------------|
-| client_id                            | string  | True     |                       |              | OAuth client ID.                                                                                                   |
-| client_secret                        | string  | True     |                       |              | OAuth client secret.                                                                                               |
-| discovery                            | string  | True     |                       |              | Discovery endpoint URL of the identity server.                                                                     |
-| scope                                | string  | False    | "openid"              |              | Scope used for authentication.                                                                                     |
-| realm                                | string  | False    | "apisix"              |              | Realm used for authentication.                                                                                     |
-| bearer_only                          | boolean | False    | false                 |              | When set to true, the Plugin will check for if the authorization header in the request matches a bearer token.     |
-| logout_path                          | string  | False    | "/logout"             |              | Path for logging out.                                                                                              |
-| post_logout_redirect_uri             | string  | False    |                       |              | URL to redirect to after logging out.                                                                              |
-| redirect_uri                         | string  | False    | "ngx.var.request_uri" |              | URI to which the identity provider redirects back to.                                                              |
-| timeout                              | integer | False    | 3                     | [1,...]      | Request timeout time in seconds.                                                                                   |
-| ssl_verify                           | boolean | False    | false                 |              | When set to true, verifies the identity provider's SSL certificates.                                               |
-| introspection_endpoint               | string  | False    |                       |              | URL of the token verification endpoint of the identity server.                                                     |
-| introspection_endpoint_auth_method   | string  | False    | "client_secret_basic" |              | Authentication method name for token introspection.                                                                |
-| public_key                           | string  | False    |                       |              | Public key to verify the token.                                                                                    |
-| use_jwks                             | boolean | False    |                       |              | When set to true, uses the JWKS endpoint of the identity server to verify the token.                               |
-| token_signing_alg_values_expected    | string  | False    |                       |              | Algorithm used for signing the authentication token.                                                               |
-| set_access_token_header              | boolean | False    | true                  |              | When set to true, sets the access token in a request header.                                                       |
-| access_token_in_authorization_header | boolean | False    | false                 |              | When set to true, sets the access token in the `Authorization` header. Otherwise, set the `X-Access-Token` header. |
-| set_id_token_header                  | boolean | False    | true                  |              | When set to true and the ID token is available, sets the ID token in the `X-ID-Token` request header.              |
-| set_userinfo_header                  | boolean | False    | true                  |              | When set to true and the UserInfo object is available, sets it in the `X-Userinfo` request header.                 |
-| set_refresh_token_header                  | boolean | False    | false                  |              | When set to true and a refresh token object is available, sets it in the `X-Refresh-Token` request header.                 |
-
-## Modes of operation
-
-The `openid-connect` Plugin offers three modes of operation:
-
-1. The Plugin can be configured to just validate an access token that is expected to be present in a request header. In such cases, requests without a token or with an invalid token are rejected. This requires the `bearer_only` attribute to be set to `true` and either `introspection_endpoint` or `public_key` attribute to be configured. This mode of operation can be used for service-to-service communication where the requester can reasonably be expected to obtain and manage a valid token by itself.
-
-2. The Plugin can be configured to authenticate requests without a valid token against an identity provider through OIDC authorization. The Plugin then acts as an OIDC Relying Party. In such cases, after successful authentication, the Plugin obtains and manages an access token in a session cookie. Subsequent requests that contain the cookie will use the access token. This requires the `bearer_only` attribute to be set to `false`. This mode of operation can be used to support cases where the client or the requester is a human interacting through a web browser.
-
-3. The Plugin can also be configured to support both the scenarios by setting `bearer_only` to `false` and also configuring either the `introspection_endpoint` or `public_key` attribute. In such cases, introspection of an existing token from a request header takes precedence over the Relying Party flow. That is, if a request contains an invalid token, it will be rejected without redirecting to the ID provider to obtain a valid token.
-
-The method used to authenticate a request also affects the headers that can be enforced on the request before sending it to an Upstream service. You can learn more about this on the sections below.
+| Name                                 | Type    | Required | Default               | Valid values | Description                                                                                                              |
+|--------------------------------------|---------|----------|-----------------------|--------------|--------------------------------------------------------------------------------------------------------------------------|
+| client_id                            | string  | True     |                       |              | OAuth client ID.                                                                                                         |
+| client_secret                        | string  | True     |                       |              | OAuth client secret.                                                                                                     |
+| discovery                            | string  | True     |                       |              | Discovery endpoint URL of the identity server.                                                                           |
+| scope                                | string  | False    | "openid"              |              | Scope used for authentication.                                                                                           |
+| realm                                | string  | False    | "apisix"              |              | Realm used for authentication.                                                                                           |
+| bearer_only                          | boolean | False    | false                 |              | When set to `true`, APISIX will only check if the authorization header in the request matches a bearer token.           |
+| logout_path                          | string  | False    | "/logout"             |              | Path for logging out.                                                                                                    |
+| post_logout_redirect_uri             | string  | False    |                       |              | URL to redirect to after logging out.                                                                                    |
+| redirect_uri                         | string  | False    | "ngx.var.request_uri" |              | URI to which the identity provider redirects back to.                                                                    |
+| timeout                              | integer | False    | 3                     | [1,...]      | Request timeout time in seconds.                                                                                         |
+| ssl_verify                           | boolean | False    | false                 |              | When set to true, verifies the identity provider's SSL certificates.                                                     |
+| introspection_endpoint               | string  | False    |                       |              | URL of the token verification endpoint of the identity server.                                                           |
+| introspection_endpoint_auth_method   | string  | False    | "client_secret_basic" |              | Authentication method name for token introspection.                                                                      |
+| token_endpoint_auth_method           | string  | False    |                       |              | Authentication method name for the token endpoint. Defaults to the first supported method specified by the OpenID Provider (OP).    |
+| public_key                           | string  | False    |                       |              | Public key to verify the token.                                                                                          |
+| use_jwks                             | boolean | False    | false                 |              | When set to `true`, uses the JWKS endpoint of the identity server to verify the token.                                   |
+| use_pkce                             | boolean | False    | false                 |              | When set to `true`, uses the Proof Key for Code Exchange (PKCE) as defined in RFC 7636.                                  |
+| token_signing_alg_values_expected    | string  | False    |                       |              | Algorithm used for signing the authentication token.                                                                     |
+| set_access_token_header              | boolean | False    | true                  |              | When set to true, sets the access token in a request header.                                                             |
+| access_token_in_authorization_header | boolean | False    | false                 |              | When set to true, sets the access token in the `Authorization` header. Otherwise, sets it in the `X-Access-Token` header. |
+| set_id_token_header                  | boolean | False    | true                  |              | When set to true and the ID token is available, sets the ID token in the `X-ID-Token` request header.                    |
+| set_userinfo_header                  | boolean | False    | true                  |              | When set to true and the UserInfo object is available, sets it in the `X-Userinfo` request header.                       |
+| set_refresh_token_header             | boolean | False    | false                 |              | When set to true and a refresh token object is available, sets it in the `X-Refresh-Token` request header.               |
+
+## Scenarios
+
+:::tip
+
+Tutorial: [Use Keycloak with API Gateway to secure APIs](https://apisix.apache.org/blog/2022/07/06/use-keycloak-with-api-gateway-to-secure-apis/)
+
+:::
+
+This Plugin supports two scenarios:
+
+1. Authentication between Services: Set `bearer_only` to `true` and configure the `introspection_endpoint` or `public_key` attribute. In this scenario, APISIX will reject requests without a token or with an invalid token in the request header.
+
+2. Authentication between Browser and Identity Providers: Set `bearer_only` to `false`. After successful authentication, this Plugin obtains and manages the token in a cookie, and subsequent requests will use the token.
 
 ### Token introspection
 
-Token introspection validates a request by verifying the token with an OAuth 2 authorization server.
+[Token introspection](https://www.oauth.com/oauth2-servers/token-introspection-endpoint/) validates a request by verifying the token with an OAuth 2.0 authorization server.
 
 You should first create a trusted client in the identity server and generate a valid JWT token for introspection.
 
@@ -85,24 +89,21 @@ The example below shows how you can enable the Plugin on Route. The Rouet below
 curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
   "uri": "/get",
-  "plugins": {
-    "proxy-rewrite": {
-      "scheme": "https"
-    },
-    "openid-connect": {
-      "client_id": "api_six_client_id",
-      "client_secret": "client_secret_code",
-      "discovery": "full_URL_of_the_discovery_endpoint",
-      "introspection_endpoint": "full_URL_of_introspection_endpoint",
+  "plugins":{
+    "openid-connect":{
+      "client_id": "${CLIENT_ID}",
+      "client_secret": "${CLIENT_SECRET}",
+      "discovery": "${DISCOVERY_ENDPOINT}",
+      "introspection_endpoint": "${INTROSPECTION_ENDPOINT}",
       "bearer_only": true,
       "realm": "master",
       "introspection_endpoint_auth_method": "client_secret_basic"
     }
   },
-  "upstream": {
+  "upstream":{
     "type": "roundrobin",
-    "nodes": {
-      "httpbin.org:443": 1
+    "nodes":{
+      "httpbin.org:443":1
     }
   }
 }'
@@ -111,12 +112,12 @@ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f13
 Now, to access the Route:
 
 ```bash
-curl -i -X GET http://127.0.0.1:9080/get -H "Host: httpbin.org" -H "Authorization: Bearer {replace_jwt_token}"
+curl -i -X GET http://127.0.0.1:9080/get -H "Host: httpbin.org" -H "Authorization: Bearer {JWT_TOKEN}"
 ```
 
 In this example, the Plugin enforces that the access token and the Userinfo object be set in the request headers.
 
-When the OAuth 2 authorization server returns an expire time with the token, it is cached in APISIX until expiry. For more details, read:
+When the OAuth 2.0 authorization server returns an expiry time with the token, it is cached in APISIX until expiry. For more details, read:
 
 1. [lua-resty-openidc](https://github.com/zmartzone/lua-resty-openidc)'s documentation and source code.
 2. `exp` field in the RFC's [Introspection Response](https://tools.ietf.org/html/rfc7662#section-2.2) section.
@@ -131,26 +132,23 @@ The example below shows how you can add public key introspection to a Route:
 curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
   "uri": "/get",
-  "plugins": {
-    "proxy-rewrite": {
-      "scheme": "https"
-    },
-    "openid-connect": {
-      "client_id": "api_six_client_id",
-      "client_secret": "client_secret_code",
-      "discovery": "full_URL_of_the_discovery_endpoint",
+  "plugins":{
+    "openid-connect":{
+      "client_id": "${CLIENT_ID}",
+      "client_secret": "${CLIENT_SECRET}",
+      "discovery": "${DISCOVERY_ENDPOINT}",
       "bearer_only": true,
       "realm": "master",
       "token_signing_alg_values_expected": "RS256",
-      "public_key" : "-----BEGIN PUBLIC KEY-----
-        {public_key}
-        -----END PUBLIC KEY-----"
-}
+      "public_key": "-----BEGIN PUBLIC KEY-----
+      {public_key}
+      -----END PUBLIC KEY-----"
+    }
   },
-  "upstream": {
+  "upstream":{
     "type": "roundrobin",
-    "nodes": {
-      "httpbin.org:443": 1
+    "nodes":{
+      "httpbin.org:443":1
     }
   }
 }'
@@ -171,16 +169,13 @@ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f13
 {
   "uri": "/get",
   "plugins": {
-    "proxy-rewrite": {
-      "scheme": "https"
-    },
     "openid-connect": {
-      "client_id": "api_six_client_id",
-      "client_secret": "client_secret_code",
-      "discovery": "full_URL_of_the_discovery_endpoint",
+      "client_id": "${CLIENT_ID}",
+      "client_secret": "${CLIENT_SECRET}",
+      "discovery": "${DISCOVERY_ENDPOINT}",
       "bearer_only": false,
       "realm": "master"
-}
+    }
   },
   "upstream": {
     "type": "roundrobin",
@@ -195,4 +190,11 @@ In this example, the Plugin can enforce that the access token, the ID token, and
 
 ## Troubleshooting
 
-If APISIX cannot resolve/connect to the identity provider, check/modify the DNS settings in your configuration file (`conf/config.yaml`).
+1. If APISIX cannot resolve/connect to the identity provider (e.g., Okta, Keycloak, Authing), check/modify the DNS settings in your configuration file (`conf/config.yaml`).
+
+2. If you encounter the error `the error request to the redirect_uri path, but there's no session state found`, confirm that the URL currently being accessed carries the `code` and `state` parameters, and do not access `redirect_uri` directly.
+
+3. If you encounter the error `the error request to the redirect_uri path, but there's no session state found`, also check the `redirect_uri` attribute: APISIX initiates an authentication request to the identity provider; after the provider completes its authentication and authorization logic, it redirects to the address configured in `redirect_uri` (e.g., `http://127.0.0.1:9080/callback`) with the ID token and access token, and the request enters APISIX again to complete the token exchange in the OIDC logic. The `redirect_uri` attribute needs to meet the following conditions:
+
+- `redirect_uri` must be matched by the route on which the Plugin is enabled. For example, if the route's `uri` is `/api/v1/*`, `redirect_uri` can be set to `/api/v1/callback`;
+- the `scheme` and `host` of `redirect_uri` are the values needed to reach APISIX from the identity provider's perspective.
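+For example, the conditions above can be satisfied with a route configured like the following sketch (the client credentials, discovery endpoint, and `redirect_uri` host are illustrative placeholders):
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+  "uri": "/api/v1/*",
+  "plugins": {
+    "openid-connect": {
+      "client_id": "${CLIENT_ID}",
+      "client_secret": "${CLIENT_SECRET}",
+      "discovery": "${DISCOVERY_ENDPOINT}",
+      "bearer_only": false,
+      "realm": "master",
+      "redirect_uri": "http://127.0.0.1:9080/api/v1/callback"
+    }
+  },
+  "upstream": {
+    "type": "roundrobin",
+    "nodes": {
+      "httpbin.org:443": 1
+    }
+  }
+}'
+```
+
+Here `/api/v1/callback` is captured by the route's `uri` pattern `/api/v1/*`, and the `scheme:host` portion points at an address the identity provider can reach.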
index fc60565adb5846c379eed6f06716593481151961..7de452ea5a0b78b55c63166dbcc879918c49db27 100644 (file)
@@ -69,6 +69,10 @@ docker run --rm -d \
 docker exec openwhisk waitready
 ```
 
+Install the [openwhisk-cli](https://github.com/apache/openwhisk-cli) utility.
+
+You can download the released executable binary `wsk` for Linux systems from the repository's releases page.
+
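+For example (the release version below is an assumption; check the [openwhisk-cli](https://github.com/apache/openwhisk-cli) releases page for the latest):
+
+```shell
+# Download the release tarball, extract the wsk binary, and put it on the PATH
+wget https://github.com/apache/openwhisk-cli/releases/download/1.2.0/OpenWhisk_CLI-1.2.0-linux-amd64.tgz
+tar -xzf OpenWhisk_CLI-1.2.0-linux-amd64.tgz wsk
+sudo install -m 0755 wsk /usr/local/bin/wsk
+```
+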
 You can then create an action to test:
 
 ```shell
index 0e02aa6bb4b951a98ad592428bb4ab7df9cc8397..4b5e97a5263d52d32e558230c139c40da2e7aaf9 100644 (file)
@@ -53,6 +53,29 @@ plugin_attr:
     export_uri: /apisix/metrics
 ```
 
+### Specifying `metrics`
+
+For HTTP request related metrics, you can specify extra labels whose values are taken from APISIX variables.
+
+If you specify a label for a nonexistent APISIX variable, the label value will be an empty string (`""`).
+
+Currently, only the metrics below are supported:
+
+* http_status
+* http_latency
+* bandwidth
+
+Here is a configuration example:
+
+```yaml title="conf/config.yaml"
+plugin_attr:
+  prometheus:
+    metrics:
+      http_status:
+        extra_labels:
+          - upstream_addr: $upstream_addr
+          - upstream_status: $upstream_status
+```
+
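+With the configuration above, the exported `http_status` metric carries the extra labels, for example (label names other than the extra ones follow the HTTP status label set documented below; the sample values are illustrative):
+
+```
+apisix_http_status{code="200",route="1",matched_uri="/hello",matched_host="",service="",consumer="",node="127.0.0.1",upstream_addr="127.0.0.1:80",upstream_status="200"} 1
+```
+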
 ## API
 
 This Plugin will add the API endpoint `/apisix/prometheus/metrics` or your custom export URI for exposing the metrics.
@@ -201,6 +224,7 @@ The following metrics are exported by the `prometheus` Plugin:
   | node     | IP address of the Upstream node.                                                                                                    |
 
 - Info: Information about the APISIX node.
+- Shared dict: The capacity and free space of each NGINX shared dictionary (`ngx.shared.DICT`) used in APISIX.
 
 Here are the original metrics from APISIX:
 
@@ -272,6 +296,24 @@ apisix_http_latency_bucket{type="upstream",route="1",service="",consumer="",node
 # HELP apisix_node_info Info of APISIX node
 # TYPE apisix_node_info gauge
 apisix_node_info{hostname="desktop-2022q8f-wsl"} 1
+# HELP apisix_shared_dict_capacity_bytes The capacity of each nginx shared DICT since APISIX start
+# TYPE apisix_shared_dict_capacity_bytes gauge
+apisix_shared_dict_capacity_bytes{name="access-tokens"} 1048576
+apisix_shared_dict_capacity_bytes{name="balancer-ewma"} 10485760
+apisix_shared_dict_capacity_bytes{name="balancer-ewma-last-touched-at"} 10485760
+apisix_shared_dict_capacity_bytes{name="balancer-ewma-locks"} 10485760
+apisix_shared_dict_capacity_bytes{name="discovery"} 1048576
+apisix_shared_dict_capacity_bytes{name="etcd-cluster-health-check"} 10485760
+...
+# HELP apisix_shared_dict_free_space_bytes The free space of each nginx shared DICT since APISIX start
+# TYPE apisix_shared_dict_free_space_bytes gauge
+apisix_shared_dict_free_space_bytes{name="access-tokens"} 1032192
+apisix_shared_dict_free_space_bytes{name="balancer-ewma"} 10412032
+apisix_shared_dict_free_space_bytes{name="balancer-ewma-last-touched-at"} 10412032
+apisix_shared_dict_free_space_bytes{name="balancer-ewma-locks"} 10412032
+apisix_shared_dict_free_space_bytes{name="discovery"} 1032192
+apisix_shared_dict_free_space_bytes{name="etcd-cluster-health-check"} 10412032
+...
 ```
 
 ## Disable Plugin
index 34229ca71dec452e03b67e3ceb95ae92da22cce6..4e53a42ad663bf6ff46f004ff7997227359b1299 100644 (file)
@@ -2,10 +2,9 @@
 title: proxy-control
 keywords:
   - APISIX
-  - Plugin
+  - API Gateway
   - Proxy Control
-  - proxy-control
-description: This document contains information about the Apache APISIX proxy-control Plugin.
+description: This document contains information about the Apache APISIX proxy-control Plugin, you can use it to control the behavior of the NGINX proxy dynamically.
 ---
 
 <!--
@@ -29,11 +28,11 @@ description: This document contains information about the Apache APISIX proxy-co
 
 ## Description
 
-The proxy-control Plugin dynamically controls the behavior of the Nginx proxy.
+The proxy-control Plugin dynamically controls the behavior of the NGINX proxy.
 
 :::info IMPORTANT
 
-This Plugin requires APISIX to run on APISIX-Base. See [apisix-build-tools](https://github.com/api7/apisix-build-tools) for more info.
+This Plugin requires APISIX to run on [APISIX-Base](../FAQ.md#how-do-i-build-the-apisix-base-environment). See [apisix-build-tools](https://github.com/api7/apisix-build-tools) for more info.
 
 :::
 
@@ -41,14 +40,15 @@ This Plugin requires APISIX to run on APISIX-Base. See [apisix-build-tools](http
 
 | Name              | Type    | Required | Default | Description                                                                                                                                                                 |
 | ----------------- | ------- | -------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| request_buffering | boolean | False    | true    | When set to `true`, the Plugin dynamically sets the [proxy_request_buffering](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering) directive. |
+| request_buffering | boolean | False    | true    | When set to `true`, the Plugin dynamically sets the [`proxy_request_buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering) directive. |
 
 ## Enabling the Plugin
 
 The example below enables the Plugin on a specific Route:
 
 ```shell
-curl -i http://127.0.0.1:9080/apisix/admin/routes/1  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 \
+  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
 {
     "uri": "/upload",
     "plugins": {
@@ -80,7