Configure Linux host-network deployment

2026-03-04 19:37:09 +08:00
parent 663999f173
commit feb99faaf3
4 changed files with 216 additions and 188 deletions

.env.example

@@ -5,7 +5,7 @@ PG_DSN=postgresql+asyncpg://sentinel:password@postgres:5432/sentinel
 SENTINEL_HMAC_SECRET=replace-with-a-random-32-byte-secret
 ADMIN_PASSWORD=replace-with-a-strong-password
 ADMIN_JWT_SECRET=replace-with-a-random-jwt-secret
-TRUSTED_PROXY_IPS=172.18.0.0/16
+TRUSTED_PROXY_IPS=172.30.0.0/24
 SENTINEL_FAILSAFE_MODE=closed
 APP_PORT=7000
 ALERT_WEBHOOK_URL=

README.md

@@ -1,18 +1,18 @@
# Key-IP Sentinel

Key-IP Sentinel is a FastAPI-based reverse proxy that enforces first-use IP binding for model API keys before traffic reaches a downstream New API service.

## Features

- First-use bind with HMAC-SHA256 token hashing, Redis cache-aside, and PostgreSQL CIDR matching.
- Streaming reverse proxy built on `httpx.AsyncClient` and FastAPI `StreamingResponse`.
- Trusted proxy IP extraction that only accepts `X-Real-IP` from configured upstream networks.
- Redis-backed intercept alert counters with webhook delivery and PostgreSQL audit logs.
- Admin API protected by JWT and a Redis-backed login lockout.
- Vue 3 + Element Plus admin console for dashboards, binding operations, audit logs, and live runtime settings.
- Docker Compose deployment with Nginx, the app, Redis, and PostgreSQL.

## Repository Layout

```text
sentinel/
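The first feature above, HMAC-SHA256 token hashing, can be sketched as follows. This is an illustrative stand-in, not the repository's actual code; the function name and argument layout are assumptions:

```python
import hashlib
import hmac

def hash_token(token: str, secret: str) -> str:
    """Hash a bearer token with HMAC-SHA256 so raw API keys are never
    stored in Redis or PostgreSQL, only their keyed digests."""
    return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()
```

Binding lookups then compare digests rather than raw keys; note that rotating the HMAC secret invalidates every stored binding.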
@@ -26,83 +26,103 @@ sentinel/
└── README.md
```

## Runtime Notes

- Redis stores the binding cache, alert counters, daily dashboard metrics, and mutable runtime settings.
- PostgreSQL stores authoritative token bindings and intercept logs.
- Archive retention removes inactive bindings from the active table after `ARCHIVE_DAYS`; a later request with the same token binds again on first use.
- `SENTINEL_FAILSAFE_MODE=closed` rejects requests when both Redis and PostgreSQL are unavailable; `open` lets traffic through.
- Binding rules support `single` (one IP or one CIDR), `multiple` (multiple discrete IPs), and `all` (allow all source IPs).
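A minimal sketch of how such rule matching can work with the standard `ipaddress` module. The function signature and rule encoding are assumptions for illustration, not the repository's actual implementation:

```python
import ipaddress

def ip_allowed(rule: str, bound: list[str], source_ip: str) -> bool:
    """Check a source IP against a binding rule.

    rule: 'single' (one IP or one CIDR), 'multiple' (discrete IPs), or 'all'.
    """
    if rule == "all":
        return True
    addr = ipaddress.ip_address(source_ip)
    if rule == "single":
        # strict=False accepts both a plain IP and a CIDR like 10.0.0.0/24
        return addr in ipaddress.ip_network(bound[0], strict=False)
    if rule == "multiple":
        return any(addr == ipaddress.ip_address(v) for v in bound)
    return False
```

A plain IP under `single` becomes a /32 (or /128) network, so the same containment check covers both cases.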
## Sentinel and New API Relationship

Sentinel and New API are expected to run as **two separate Docker Compose projects**:

- The **Sentinel compose** contains `nginx`, `sentinel-app`, `redis`, and `postgres`.
- The **New API compose** contains your existing New API service and its own dependencies.
- The two stacks communicate through a **shared external Docker network**.

Traffic flow:

```text
Client / SDK
     |
     |  request to Sentinel public endpoint
     v
Sentinel nginx -> sentinel-app -> New API service -> model backend
     |
     +-> redis / postgres
```

The key point is: **clients should call Sentinel, not New API directly**, otherwise IP binding will not take effect.
## Getting the Real Client IP on Linux

If you want the Linux deployment host to record real LAN client IPs, do not expose the public entrypoint through Docker bridge port publishing such as `3000:80`.

The recommended production topology:

- `nginx` uses `network_mode: host`
- `nginx` listens directly on host port `3000`
- `sentinel-app` stays on the internal bridge network with a fixed IP
- `sentinel-app` also joins `shared_network` to reach New API
- `new-api` remains internally reachable and is no longer exposed to clients directly

Why this design:

- When Docker publishes ports via `ports:`, the client-facing hop usually passes through NAT.
- Containers then see a bridge address such as `172.28.x.x` instead of the real client IP.
- `shared_network` only carries internal traffic between Sentinel and New API; it does not determine the source address of inbound client connections.

When `nginx` uses `network_mode: host`, it receives real inbound connections on the host directly, so it can forward the true source IP to `sentinel-app` via `X-Real-IP`.
## Recommended Deployment Topology

Use one external network name for both compose projects. This repository currently uses:
```text
shared_network
```

In the Sentinel compose:

- `sentinel-app` joins `shared_network`
- `nginx` exposes the public entrypoint through the Linux host network
- `DOWNSTREAM_URL` points to the **New API service name on that shared network**

In the New API compose:

- The New API container must also join `shared_network`
- The New API service name must match the hostname Sentinel uses in `DOWNSTREAM_URL`

Example:

- New API compose service name: `new-api`
- New API internal container port: `3000`
- Sentinel `.env`: `DOWNSTREAM_URL=http://new-api:3000`

If your New API service is named differently, change `DOWNSTREAM_URL` accordingly, for example:

```text
DOWNSTREAM_URL=http://my-newapi:3000
```

## Common New API Connection Patterns

In practice, you may run New API in either of two ways.

### Pattern A: Production machine, New API in its own compose

This is the recommended production arrangement.

New API keeps its own compose project and typically joins:

- `default`
- `shared_network`

That means New API can continue to use its own internal compose network for its dependencies, while also exposing its service name to Sentinel through `shared_network`.

Example New API compose fragment:

```yaml
services:
@@ -117,17 +137,17 @@ networks:
    external: true
```

With this setup, Sentinel still uses:

```text
DOWNSTREAM_URL=http://new-api:3000
```

### Pattern B: Test machine, New API started as a standalone container

On a test machine, you may not use a second compose project at all. Instead, you can start a standalone New API container with `docker run`, as long as that container also joins `shared_network`.

Example:

```bash
docker run -d \
@@ -136,101 +156,101 @@ docker run -d \
  your-new-api-image
```

Important:

- The container name or reachable hostname must match what Sentinel uses in `DOWNSTREAM_URL`.
- If the container is not named `new-api`, adjust `.env` accordingly.
- The port in `DOWNSTREAM_URL` is still the New API container's internal listening port.

Example:

```text
DOWNSTREAM_URL=http://new-api:3000
```

or, if your standalone container is named differently:

```text
DOWNSTREAM_URL=http://new-api-test:3000
```
## Local Development

### Backend

1. Install `uv` and ensure Python 3.13 is available.
2. Create the environment and sync dependencies:

```bash
uv sync
```

3. Copy `.env.example` to `.env` and update the secrets and connection addresses.
4. Start PostgreSQL and Redis.
5. Run the API:

```bash
uv run uvicorn app.main:app --reload --host 0.0.0.0 --port 7000
```

### Frontend

1. Install dependencies:

```bash
cd frontend
npm install
```

2. Start the Vite dev server:

```bash
npm run dev
```

The Vite config proxies `/admin/api/*` to `http://127.0.0.1:7000`.

If you prefer the repository root entrypoint, `uv run main.py` starts the same FastAPI app on `APP_PORT` (default `7000`).
## Dependency Management

- Local Python development uses `uv` via [`pyproject.toml`](pyproject.toml).
- The container runtime image uses [`requirements.txt`](requirements.txt) and intentionally installs only Python dependencies.
- Application source code is mounted by Compose at runtime, so the offline host does not need to rebuild the image just to pick up current backend code.
## Offline Deployment Model

If your production machine has no internet access, use this repository as follows:

1. Build the `key-ip-sentinel:latest` image on a machine with internet access.
2. Export that image as a tar archive.
3. Import the archive on the offline machine.
4. Copy the repository files to the offline machine.
5. Start the stack with `docker compose up -d`, not `docker compose up --build -d`.

This works because:

- `Dockerfile` installs only Python dependencies into the image.
- `docker-compose.yml` mounts `./app` into the running `sentinel-app` container.
- The offline machine only needs the prebuilt image plus the repository files.

Important limitations:

- If you change Python dependencies in `requirements.txt`, you must rebuild and re-export the image on a connected machine.
- If you only change backend application code under `app/`, you do not need to rebuild the image; restarting the container is enough.
- `frontend/dist` must already exist before deployment, because Nginx serves the built frontend directly from the repository.
- The base images used by this stack, such as `nginx:alpine`, `redis:7-alpine`, and `postgres:16`, must also be available on the offline host in advance.

### Prepare images on a connected machine

Build and export the Sentinel runtime image:

```bash
docker build -t key-ip-sentinel:latest .
docker save -o key-ip-sentinel-latest.tar key-ip-sentinel:latest
```

Also export the public images used by Compose if the offline machine cannot pull them:

```bash
docker pull nginx:alpine
@@ -240,7 +260,7 @@ docker pull postgres:16
docker save -o sentinel-support-images.tar nginx:alpine redis:7-alpine postgres:16
```

If the admin frontend is not already built, build it on the connected machine too:

```bash
cd frontend
@@ -249,44 +269,44 @@ npm run build
cd ..
```

Then copy these items to the offline machine:

- the full repository working tree
- `key-ip-sentinel-latest.tar`
- `sentinel-support-images.tar`, if needed

### Import images on the offline machine

```bash
docker load -i key-ip-sentinel-latest.tar
docker load -i sentinel-support-images.tar
```

### Start on the offline machine

After `.env`, `frontend/dist`, and `shared_network` are ready:

```bash
docker compose up -d
```

## Production Deployment

### 1. Create the shared Docker network

Create the external network once on the Docker host:

```bash
docker network create shared_network
```

Both compose projects must reference this exact same external network name.

### 2. Make sure New API joins the shared network

In the **New API** project, add the external network to the New API service.

Minimal example:

```yaml
services:
@@ -301,22 +321,22 @@ networks:
    external: true
```

Important:

- `new-api` here is the **service name** that Sentinel resolves on the shared network.
- The port in `DOWNSTREAM_URL` must be the **container's internal port**, not the host-published port.
- If New API already listens on `3000` inside the container, use `http://new-api:3000`.
- On a production host, New API can keep both `default` and `shared_network` at the same time.
- On a test host, you can skip a second compose project and use `docker run`, but the container must still join `shared_network`.

### 3. Prepare the Sentinel environment

1. Copy `.env.example` to `.env`.
2. Replace `SENTINEL_HMAC_SECRET`, `ADMIN_PASSWORD`, and `ADMIN_JWT_SECRET`.
3. Verify `DOWNSTREAM_URL` points to the New API **service name on `shared_network`**.
4. Keep `PG_DSN` aligned with the PostgreSQL password in `docker-compose.yml`; if you change one, change both.

Example `.env` for Sentinel:

```text
DOWNSTREAM_URL=http://new-api:3000
@@ -326,7 +346,7 @@ PG_DSN=postgresql+asyncpg://sentinel:password@postgres:5432/sentinel
SENTINEL_HMAC_SECRET=replace-with-a-random-32-byte-secret
ADMIN_PASSWORD=replace-with-a-strong-password
ADMIN_JWT_SECRET=replace-with-a-random-jwt-secret
TRUSTED_PROXY_IPS=172.30.0.0/24
SENTINEL_FAILSAFE_MODE=closed
APP_PORT=7000
ALERT_WEBHOOK_URL=
@@ -335,12 +355,13 @@ ALERT_THRESHOLD_SECONDS=300
ARCHIVE_DAYS=90
```

Notes:

- `TRUSTED_PROXY_IPS` should match the Docker subnet of the Sentinel internal network; it is what lets Sentinel trust the `nginx` hop.
- If Docker recreates the compose network with a different subnet, update this value.
- The production compose in this repository pins `sentinel-net` to `172.30.0.0/24`, so the default should be `TRUSTED_PROXY_IPS=172.30.0.0/24`.
### 4. Build the Sentinel frontend bundle

```bash
cd frontend
@@ -349,79 +370,81 @@ npm run build
cd ..
```

This produces `frontend/dist`, which Nginx serves at `/admin/ui/`.

If the target host is offline, do this on a connected machine first and copy the resulting `frontend/dist` directory along with the repository.

### 5. Confirm Sentinel compose prerequisites

- Build the frontend first; if `frontend/dist` is missing, Nginx cannot serve `/admin/ui/`.
- Ensure the external Docker network `shared_network` exists before starting Sentinel.
- If the host cannot access the internet, ensure `key-ip-sentinel:latest`, `nginx:alpine`, `redis:7-alpine`, and `postgres:16` are already present.
- This production compose assumes a Linux host, because the public entrypoint uses `network_mode: host`.
### 6. Start the Sentinel stack

```bash
docker compose up -d
```

Use `docker compose up --build -d` only on a connected machine where rebuilding the Sentinel image is actually intended.

Services:

- `http://<host>:3000/` forwards model API traffic through Sentinel.
- `http://<host>:3000/admin/ui/` serves the admin console.
- `http://<host>:3000/admin/api/*` serves the admin API.
- `http://<host>:3000/health` exposes the app health check.

### 7. Verify cross-compose connectivity and real IPs

After both compose stacks are running:

1. From another machine on the LAN, open `http://<host>:3000/health` and confirm it returns `{"status":"ok"}`.
2. Open `http://<host>:3000/admin/ui/` and log in with `ADMIN_PASSWORD`.
3. Send a real model API request to Sentinel, not to New API directly.
4. Check the `Bindings` page and confirm the token appears with a recorded binding rule.
5. Confirm the recorded binding IP is the real LAN client IP, not a Docker bridge address.

Example test request:

```bash
curl http://<host>:3000/v1/models \
  -H "Authorization: Bearer <your_api_key>"
```

If your client still points directly at New API, Sentinel will not see the request and no binding will be created.
## Which Port Should Clients Use?

With the current Linux production compose in this repository:

- Sentinel public port: `3000`
- New API internal container port: usually `3000`

That means:

- Clients should call `http://<host>:3000/...`
- Sentinel forwards internally to `http://new-api:3000`

Do **not** point clients directly at New API's host port; that bypasses Sentinel.

## How To Go Live Without Changing Client Config

If you want existing clients to stay unchanged, Sentinel must take over the **original external entrypoint** that clients already use.

Typical cutover strategy:

1. Keep New API on the shared internal Docker network.
2. Stop exposing New API directly to users.
3. Expose Sentinel on the old public host/port instead.
4. Keep `DOWNSTREAM_URL` pointing to the internal New API service on `shared_network`.

For example, if users currently call `http://host:3000`, then at cutover Sentinel should take over port `3000` and New API should become internal-only.

The [`docker-compose.yml`](docker-compose.yml) in this repository is already arranged for this Linux production layout: Nginx listens on host port `3000` through the host network, and New API stays internal-only.
## Admin API Summary

- `POST /admin/api/login`
- `GET /admin/api/dashboard`
@@ -435,20 +458,24 @@ The current `8016:80` mapping in [`docker-compose.yml`](/d:/project/sentinel/doc
- `GET /admin/api/settings`
- `PUT /admin/api/settings`

All admin endpoints except `/admin/api/login` require:

```text
Authorization: Bearer <jwt>
```

## Key Implementation Details

- `app/proxy/handler.py` keeps the downstream response fully streamed, including SSE responses.
- `app/core/ip_utils.py` never trusts client-supplied `X-Forwarded-For`.
- `app/services/binding_service.py` batches `last_used_at` updates every 5 seconds through an `asyncio.Queue`.
- `app/services/alert_service.py` pushes webhooks once the Redis counter reaches the configured threshold.
- `app/services/archive_service.py` prunes stale bindings on a scheduler interval.

## Suggested Smoke Checks

1. `GET /health` returns `{"status":"ok"}`.
2. A first request with a new bearer token creates a binding in PostgreSQL and Redis.
3. A second request from the same IP is allowed and refreshes `last_used_at`.
4. A request from a different IP is rejected with `403` and creates an `intercept_logs` record, unless the binding rule is `all`.
5. `/admin/api/login` returns a JWT and the frontend can load `/admin/api/dashboard`.

docker-compose.yml

@@ -2,16 +2,13 @@ services:
   nginx:
     image: nginx:alpine
     container_name: sentinel-nginx
+    network_mode: host
     restart: unless-stopped
-    ports:
-      - "8016:80"
     depends_on:
       - sentinel-app
     volumes:
       - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
       - ./frontend/dist:/etc/nginx/html/admin/ui:ro
-    networks:
-      - sentinel-net

   sentinel-app:
     image: key-ip-sentinel:latest
@@ -25,8 +22,9 @@ services:
       - redis
       - postgres
     networks:
-      - sentinel-net
-      - shared_network
+      sentinel-net:
+        ipv4_address: 172.30.0.10
+      shared_network:

   redis:
     image: redis:7-alpine
@@ -66,5 +64,8 @@ volumes:
 networks:
   sentinel-net:
     driver: bridge
+    ipam:
+      config:
+        - subnet: 172.30.0.0/24
   shared_network:
     external: true

nginx/nginx.conf

@@ -17,12 +17,12 @@ http {
     limit_req_zone $binary_remote_addr zone=api:10m rate=60r/m;

     upstream sentinel_app {
-        server sentinel-app:7000;
+        server 172.30.0.10:7000;
         keepalive 128;
     }

     server {
-        listen 80;
+        listen 3000;
         server_name _;
         client_max_body_size 32m;
@@ -51,7 +51,7 @@ http {
             proxy_pass http://sentinel_app;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
-            proxy_set_header X-Forwarded-For $remote_addr;
+            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto http;
             proxy_set_header Connection "";
         }
@@ -60,7 +60,7 @@ http {
             proxy_pass http://sentinel_app/health;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
-            proxy_set_header X-Forwarded-For $remote_addr;
+            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto http;
         }
@@ -69,7 +69,7 @@ http {
             proxy_pass http://sentinel_app;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
-            proxy_set_header X-Forwarded-For $remote_addr;
+            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto http;
             proxy_set_header Connection "";
         }