Add multi-IP binding modes and deployment guide

This commit is contained in:
2026-03-04 15:30:13 +08:00
parent 4348ee799b
commit eed1acd454
12 changed files with 509 additions and 217 deletions

README.md

@@ -32,6 +32,60 @@ sentinel/
- PostgreSQL stores authoritative token bindings and intercept logs.
- Archive retention removes inactive bindings from the active table after `ARCHIVE_DAYS`. A later request from the same token will bind again on first use.
- `SENTINEL_FAILSAFE_MODE=closed` rejects requests when both Redis and PostgreSQL are unavailable. `open` allows traffic through.
- Binding rules support `single` (single IP or single CIDR), `multiple` (multiple discrete IPs), and `all` (allow all source IPs).
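The three modes can be illustrated with a minimal, self-contained sketch of the matching rule (simplified from the service's `is_client_allowed`; the stdlib `ipaddress` module stands in for the project's own IP helpers, so treat this as an approximation rather than the exact implementation):

```python
from ipaddress import ip_address, ip_network


def is_client_allowed(client_ip: str, binding_mode: str, allowed_ips: list[str]) -> bool:
    # "all" accepts any source IP; otherwise the client must fall
    # inside at least one allowed IP or CIDR entry.
    if binding_mode == "all":
        return True
    return any(
        ip_address(client_ip) in ip_network(entry, strict=False)
        for entry in allowed_ips
    )


print(is_client_allowed("10.0.0.7", "single", ["10.0.0.0/24"]))                      # True
print(is_client_allowed("10.1.0.7", "single", ["10.0.0.0/24"]))                      # False
print(is_client_allowed("203.0.113.9", "multiple", ["203.0.113.9", "198.51.100.4"]))  # True
print(is_client_allowed("8.8.8.8", "all", []))                                        # True
```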
## Sentinel and New API Relationship
Sentinel and New API are expected to run as **two separate Docker Compose projects**:
- The **Sentinel compose** contains `nginx`, `sentinel-app`, `redis`, and `postgres`.
- The **New API compose** contains your existing New API service and its own dependencies.
- The two stacks communicate through a **shared external Docker network**.
Traffic flow:
```text
Client / SDK
|
| request to Sentinel public endpoint
v
Sentinel nginx -> sentinel-app -> New API service -> model backend
|
+-> redis / postgres
```
The key point: **clients must call Sentinel, not New API directly**; otherwise IP binding will not take effect.
## Recommended Deployment Topology
Use one external network name for both compose projects. This repository currently uses:
```text
shared_network
```
In the Sentinel compose:
- `sentinel-app` joins `shared_network`
- `nginx` exposes the public entrypoint
- `DOWNSTREAM_URL` points to the **New API service name on that shared network**
In the New API compose:
- The New API container must also join `shared_network`
- The New API service name must match what Sentinel uses in `DOWNSTREAM_URL`
Example:
- New API compose service name: `new-api`
- New API internal container port: `3000`
- Sentinel `.env`: `DOWNSTREAM_URL=http://new-api:3000`
If your New API service is named differently, change `DOWNSTREAM_URL` accordingly, for example:
```text
DOWNSTREAM_URL=http://my-newapi:3000
```
## Local Development
@@ -78,14 +132,73 @@ If you prefer the repository root entrypoint, `uv run main.py` now starts the sa
## Production Deployment
### 1. Create the shared Docker network
Create the external network once on the Docker host:
```bash
docker network create shared_network
```
Both compose projects must reference this exact same external network name.
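An idempotent variant (a sketch, assuming the network name above) that only creates the network when it does not already exist, so it is safe to rerun in provisioning scripts:

```shell
# create shared_network only if it is missing
docker network inspect shared_network >/dev/null 2>&1 || docker network create shared_network
```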
### 2. Make sure New API joins the shared network
In the **New API** project, add the external network to the New API service.
Minimal example:
```yaml
services:
new-api:
image: your-new-api-image
networks:
- default
- shared_network
networks:
shared_network:
external: true
```
Important:
- `new-api` here is the **service name** that Sentinel will resolve on the shared network.
- The port in `DOWNSTREAM_URL` must be the **container internal port**, not the host published port.
- If New API already listens on `3000` inside the container, use `http://new-api:3000`.
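Before wiring up Sentinel, you can confirm that the service name resolves across the stacks by running a throwaway container on the shared network (a sketch; `new-api` and port `3000` follow the example above — any HTTP status code, even a 404, proves DNS and TCP reachability):

```shell
# prints the HTTP status code returned by new-api over shared_network
docker run --rm --network shared_network curlimages/curl \
  -s -o /dev/null -w '%{http_code}\n' http://new-api:3000/
```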
### 3. Prepare Sentinel environment
1. Copy `.env.example` to `.env`.
2. Replace `SENTINEL_HMAC_SECRET`, `ADMIN_PASSWORD`, and `ADMIN_JWT_SECRET`.
3. Verify `DOWNSTREAM_URL` points to the New API **service name on `shared_network`**.
4. Keep `PG_DSN` aligned with the fixed PostgreSQL container password in `docker-compose.yml`, or update both together.
Example `.env` for Sentinel:
```text
DOWNSTREAM_URL=http://new-api:3000
REDIS_ADDR=redis://redis:6379
REDIS_PASSWORD=
PG_DSN=postgresql+asyncpg://sentinel:password@postgres:5432/sentinel
SENTINEL_HMAC_SECRET=replace-with-a-random-32-byte-secret
ADMIN_PASSWORD=replace-with-a-strong-password
ADMIN_JWT_SECRET=replace-with-a-random-jwt-secret
TRUSTED_PROXY_IPS=172.24.0.0/16
SENTINEL_FAILSAFE_MODE=closed
APP_PORT=7000
ALERT_WEBHOOK_URL=
ALERT_THRESHOLD_COUNT=5
ALERT_THRESHOLD_SECONDS=300
ARCHIVE_DAYS=90
```
Notes:
- `TRUSTED_PROXY_IPS` should match the Docker subnet used by the Sentinel internal network.
- If Docker recreates the compose network with a different subnet, update this value.
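One way to look up the subnet Docker actually assigned (a sketch using the standard `docker network inspect --format` template syntax) so `TRUSTED_PROXY_IPS` can be set to match:

```shell
# print the subnet assigned to shared_network, e.g. 172.24.0.0/16
docker network inspect shared_network \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```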
### 4. Build the Sentinel frontend bundle
```bash
cd frontend
@@ -96,12 +209,12 @@ cd ..
This produces `frontend/dist`, which Nginx serves at `/admin/ui/`.
### 5. Confirm Sentinel compose prerequisites
- Build the frontend first. If `frontend/dist` is missing, `/admin/ui/` cannot be served by Nginx.
- Ensure the external Docker network `shared_network` already exists before starting Sentinel.
### 6. Start the Sentinel stack
```bash
docker compose up --build -d
@@ -114,6 +227,53 @@ Services:
- `http://<host>/admin/api/*` serves the admin API.
- `http://<host>/health` exposes the app health check.
### 7. Verify cross-compose connectivity
After both compose stacks are running:
1. Open `http://<host>:8016/health` and confirm it returns `{"status":"ok"}`.
2. Open `http://<host>:8016/admin/ui/` and log in with `ADMIN_PASSWORD`.
3. Send a real model API request to Sentinel, not to New API directly.
4. Check the `Bindings` page and confirm the token appears with a recorded binding rule.
Example test request:
```bash
curl http://<host>:8016/v1/models \
-H "Authorization: Bearer <your_api_key>"
```
If your client still points directly to New API, Sentinel will not see the request and no binding will be created.
## Which Port Should Clients Use?
With the current example compose in this repository:
- Sentinel public port: `8016`
- New API internal container port: usually `3000`
That means:
- **For testing now**, clients should call `http://<host>:8016/...`
- **Sentinel forwards internally** to `http://new-api:3000`
Do **not** point clients at host port `3000` if that bypasses Sentinel.
## How To Go Live Without Changing Client Config
If you want existing clients to stay unchanged, Sentinel must take over the **original external entrypoint** that clients already use.
Typical cutover strategy:
1. Keep New API on the shared internal Docker network.
2. Stop exposing New API directly to users.
3. Expose Sentinel on the old public host/port instead.
4. Keep `DOWNSTREAM_URL` pointing to the internal New API service on `shared_network`.
For example, if users currently call `http://host:3000`, then in production you should eventually expose Sentinel on that old public port and make New API internal-only.
The current `8016:80` mapping in `docker-compose.yml` is a **local test mapping**, not the only valid production setup.
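As a sketch of such a cutover (the `nginx` service name and the old public port `3000` are assumptions based on the example above), the production change would touch only the published port on the Sentinel side:

```yaml
# hypothetical production override for the Sentinel stack
services:
  nginx:
    ports:
      - "3000:80"   # replaces the local test mapping "8016:80"
```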
## Admin API Summary
- `POST /admin/api/login`
@@ -143,5 +303,5 @@ All admin endpoints except `/admin/api/login` require `Authorization: Bearer <jw
1. `GET /health` returns `{"status":"ok"}`.
2. A first request with a new bearer token creates a binding in PostgreSQL and Redis.
3. A second request from the same IP is allowed and refreshes `last_used_at`.
4. A request from a different IP is rejected with `403` and creates an `intercept_logs` record, unless the binding rule is `all`.
5. `/admin/api/login` returns a JWT and the frontend can load `/admin/api/dashboard`.


@@ -28,6 +28,8 @@ def to_binding_item(binding: TokenBinding, binding_service: BindingService) -> B
        id=binding.id,
        token_display=binding.token_display,
        bound_ip=str(binding.bound_ip),
        binding_mode=binding.binding_mode,
        allowed_ips=[str(item) for item in binding.allowed_ips],
        status=binding.status,
        status_label=binding_service.status_label(binding.status),
        first_used_at=binding.first_used_at,
@@ -70,7 +72,13 @@ def log_admin_action(request: Request, settings: Settings, action: str, binding_
async def commit_binding_cache(binding: TokenBinding, binding_service: BindingService) -> None:
    await binding_service.sync_binding_cache(
        binding.token_hash,
        str(binding.bound_ip),
        binding.binding_mode,
        [str(item) for item in binding.allowed_ips],
        binding.status,
    )
async def update_binding_status(
@@ -138,7 +146,9 @@ async def update_bound_ip(
    binding_service: BindingService = Depends(get_binding_service),
):
    binding = await get_binding_or_404(session, payload.id)
    binding.binding_mode = payload.binding_mode
    binding.allowed_ips = payload.allowed_ips
    binding.bound_ip = binding_service.build_bound_ip_display(payload.binding_mode, payload.allowed_ips)
    await session.commit()
    await commit_binding_cache(binding, binding_service)
    log_admin_action(request, settings, "update_ip", payload.id)


@@ -76,7 +76,7 @@ async def build_recent_intercepts(session: AsyncSession) -> list[InterceptLogIte
        InterceptLogItem(
            id=item.id,
            token_display=item.token_display,
            bound_ip=item.bound_ip,
            attempt_ip=str(item.attempt_ip),
            alerted=item.alerted,
            intercepted_at=item.intercepted_at,


@@ -38,7 +38,7 @@ def to_log_item(item: InterceptLog) -> InterceptLogItem:
    return InterceptLogItem(
        id=item.id,
        token_display=item.token_display,
        bound_ip=item.bound_ip,
        attempt_ip=str(item.attempt_ip),
        alerted=item.alerted,
        intercepted_at=item.intercepted_at,
@@ -47,13 +47,13 @@ def to_log_item(item: InterceptLog) -> InterceptLogItem:
def write_log_csv(buffer: io.StringIO, logs: list[InterceptLog]) -> None:
    writer = csv.writer(buffer)
    writer.writerow(["id", "token_display", "binding_rule", "attempt_ip", "alerted", "intercepted_at"])
    for item in logs:
        writer.writerow(
            [
                item.id,
                item.token_display,
                item.bound_ip,
                str(item.attempt_ip),
                item.alerted,
                item.intercepted_at.isoformat(),


@@ -14,7 +14,7 @@ from redis.asyncio import from_url as redis_from_url
from app.api import auth, bindings, dashboard, logs, settings as settings_api
from app.config import RUNTIME_SETTINGS_REDIS_KEY, RuntimeSettings, Settings, get_settings
from app.models import intercept_log, token_binding  # noqa: F401
from app.models.db import close_db, ensure_schema_compatibility, get_session_factory, init_db
from app.proxy.handler import router as proxy_router
from app.services.alert_service import AlertService
from app.services.archive_service import ArchiveService
@@ -100,6 +100,7 @@ async def load_runtime_settings(redis: Redis | None, settings: Settings) -> Runt
async def lifespan(app: FastAPI):
    settings = get_settings()
    init_db(settings)
    await ensure_schema_compatibility()
    session_factory = get_session_factory()
    redis: Redis | None = redis_from_url(


@@ -1,5 +1,6 @@
from __future__ import annotations

from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase
@@ -40,6 +41,31 @@ def get_session_factory() -> async_sessionmaker[AsyncSession]:
    return _session_factory
async def ensure_schema_compatibility() -> None:
    engine = get_engine()
    statements = [
        "DROP INDEX IF EXISTS idx_token_bindings_ip",
        "ALTER TABLE token_bindings ALTER COLUMN bound_ip TYPE TEXT USING bound_ip::text",
        "ALTER TABLE intercept_logs ALTER COLUMN bound_ip TYPE TEXT USING bound_ip::text",
        "ALTER TABLE token_bindings ADD COLUMN IF NOT EXISTS binding_mode VARCHAR(16) DEFAULT 'single'",
        "ALTER TABLE token_bindings ADD COLUMN IF NOT EXISTS allowed_ips JSONB DEFAULT '[]'::jsonb",
        "UPDATE token_bindings SET binding_mode = 'single' WHERE binding_mode IS NULL OR binding_mode = ''",
        """
        UPDATE token_bindings
        SET allowed_ips = jsonb_build_array(bound_ip)
        WHERE allowed_ips IS NULL OR allowed_ips = '[]'::jsonb
        """,
        "ALTER TABLE token_bindings ALTER COLUMN binding_mode SET NOT NULL",
        "ALTER TABLE token_bindings ALTER COLUMN allowed_ips SET NOT NULL",
        "ALTER TABLE token_bindings ALTER COLUMN binding_mode SET DEFAULT 'single'",
        "ALTER TABLE token_bindings ALTER COLUMN allowed_ips SET DEFAULT '[]'::jsonb",
        "CREATE INDEX IF NOT EXISTS idx_token_bindings_ip ON token_bindings(bound_ip)",
    ]
    async with engine.begin() as connection:
        for statement in statements:
            await connection.execute(text(statement))
async def close_db() -> None:
    global _engine, _session_factory
    if _engine is not None:


@@ -2,8 +2,8 @@ from __future__ import annotations
from datetime import datetime

from sqlalchemy import Boolean, DateTime, Index, String, Text, func, text
from sqlalchemy.dialects.postgresql import INET
from sqlalchemy.orm import Mapped, mapped_column

from app.models.db import Base
@@ -19,7 +19,7 @@ class InterceptLog(Base):
    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    token_hash: Mapped[str] = mapped_column(String(64), nullable=False)
    token_display: Mapped[str] = mapped_column(String(20), nullable=False)
    bound_ip: Mapped[str] = mapped_column(Text, nullable=False)
    attempt_ip: Mapped[str] = mapped_column(INET, nullable=False)
    alerted: Mapped[bool] = mapped_column(Boolean, nullable=False, default=False, server_default=text("FALSE"))
    intercepted_at: Mapped[datetime] = mapped_column(


@@ -2,27 +2,42 @@ from __future__ import annotations
from datetime import datetime

from sqlalchemy import DateTime, Index, SmallInteger, String, Text, func, text
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import Mapped, mapped_column

from app.models.db import Base

STATUS_ACTIVE = 1
STATUS_BANNED = 2

BINDING_MODE_SINGLE = "single"
BINDING_MODE_MULTIPLE = "multiple"
BINDING_MODE_ALL = "all"


class TokenBinding(Base):
    __tablename__ = "token_bindings"
    __table_args__ = (
        Index("idx_token_bindings_hash", "token_hash"),
        Index("idx_token_bindings_ip", "bound_ip"),
    )

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    token_hash: Mapped[str] = mapped_column(String(64), unique=True, nullable=False)
    token_display: Mapped[str] = mapped_column(String(20), nullable=False)
    bound_ip: Mapped[str] = mapped_column(Text, nullable=False)
    binding_mode: Mapped[str] = mapped_column(
        String(16),
        nullable=False,
        default=BINDING_MODE_SINGLE,
        server_default=text("'single'"),
    )
    allowed_ips: Mapped[list[str]] = mapped_column(
        JSONB,
        nullable=False,
        default=list,
        server_default=text("'[]'::jsonb"),
    )
    status: Mapped[int] = mapped_column(
        SmallInteger,
        nullable=False,


@@ -1,8 +1,11 @@
from __future__ import annotations

from datetime import datetime
from ipaddress import ip_address, ip_network

from pydantic import BaseModel, ConfigDict, Field, model_validator

from app.models.token_binding import BINDING_MODE_ALL, BINDING_MODE_MULTIPLE, BINDING_MODE_SINGLE
class BindingItem(BaseModel):
@@ -11,6 +14,8 @@ class BindingItem(BaseModel):
    id: int
    token_display: str
    bound_ip: str
    binding_mode: str
    allowed_ips: list[str]
    status: int
    status_label: str
    first_used_at: datetime
@@ -31,12 +36,32 @@ class BindingActionRequest(BaseModel):
class BindingIPUpdateRequest(BaseModel):
    id: int = Field(gt=0)
    binding_mode: str = Field(default=BINDING_MODE_SINGLE)
    allowed_ips: list[str] = Field(default_factory=list)

    @model_validator(mode="after")
    def validate_binding_rule(self):
        allowed_ips = [item.strip() for item in self.allowed_ips if item.strip()]

        if self.binding_mode == BINDING_MODE_ALL:
            self.allowed_ips = []
            return self

        if self.binding_mode == BINDING_MODE_SINGLE:
            if len(allowed_ips) != 1:
                raise ValueError("Single binding mode requires exactly one IP or CIDR.")
            ip_network(allowed_ips[0], strict=False)
            self.allowed_ips = allowed_ips
            return self

        if self.binding_mode == BINDING_MODE_MULTIPLE:
            if not allowed_ips:
                raise ValueError("Multiple binding mode requires at least one IP.")
            normalized: list[str] = []
            for item in allowed_ips:
                ip_address(item)
                normalized.append(item)
            self.allowed_ips = normalized
            return self

        raise ValueError("Unsupported binding mode.")


@@ -5,18 +5,25 @@ import json
import logging
import time
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Callable

from redis.asyncio import Redis
from sqlalchemy import func, select, update
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker

from app.config import RuntimeSettings, Settings
from app.core.ip_utils import is_ip_in_network
from app.core.security import hash_token, mask_token
from app.models.token_binding import (
    BINDING_MODE_ALL,
    BINDING_MODE_MULTIPLE,
    BINDING_MODE_SINGLE,
    STATUS_ACTIVE,
    STATUS_BANNED,
    TokenBinding,
)
logger = logging.getLogger(__name__)
@@ -27,6 +34,8 @@ class BindingRecord:
    token_hash: str
    token_display: str
    bound_ip: str
    binding_mode: str
    allowed_ips: list[str]
    status: int
    ip_matched: bool
@@ -104,42 +113,101 @@ class BindingService:
    def metrics_key(self, target_date: date) -> str:
        return f"sentinel:metrics:{target_date.isoformat()}"
    def build_bound_ip_display(self, binding_mode: str, allowed_ips: list[str]) -> str:
        if binding_mode == BINDING_MODE_ALL:
            return "ALL"
        if not allowed_ips:
            return "-"
        if binding_mode == BINDING_MODE_MULTIPLE:
            return ", ".join(allowed_ips)
        return allowed_ips[0]

    def is_client_allowed(self, client_ip: str, binding_mode: str, allowed_ips: list[str]) -> bool:
        if binding_mode == BINDING_MODE_ALL:
            return True
        return any(is_ip_in_network(client_ip, item) for item in allowed_ips)

    def to_binding_record(self, binding: TokenBinding, client_ip: str) -> BindingRecord:
        allowed_ips = [str(item) for item in binding.allowed_ips]
        binding_mode = binding.binding_mode or BINDING_MODE_SINGLE
        return BindingRecord(
            id=binding.id,
            token_hash=binding.token_hash,
            token_display=binding.token_display,
            bound_ip=binding.bound_ip,
            binding_mode=binding_mode,
            allowed_ips=allowed_ips,
            status=binding.status,
            ip_matched=self.is_client_allowed(client_ip, binding_mode, allowed_ips),
        )

    def denied_result(
        self,
        token_hash: str,
        token_display: str,
        bound_ip: str,
        detail: str,
        *,
        should_alert: bool = True,
        status_code: int = 403,
    ) -> BindingCheckResult:
        return BindingCheckResult(
            allowed=False,
            status_code=status_code,
            detail=detail,
            token_hash=token_hash,
            token_display=token_display,
            bound_ip=bound_ip,
            should_alert=should_alert,
        )

    def allowed_result(
        self,
        token_hash: str,
        token_display: str,
        bound_ip: str,
        detail: str,
        *,
        newly_bound: bool = False,
    ) -> BindingCheckResult:
        return BindingCheckResult(
            allowed=True,
            status_code=200,
            detail=detail,
            token_hash=token_hash,
            token_display=token_display,
            bound_ip=bound_ip,
            newly_bound=newly_bound,
        )

    def evaluate_existing_record(
        self,
        record: BindingRecord,
        token_hash: str,
        token_display: str,
        detail: str,
    ) -> BindingCheckResult:
        if record.status == STATUS_BANNED:
            return self.denied_result(token_hash, token_display, record.bound_ip, "Token is banned.")
        if record.ip_matched:
            self.record_last_used(token_hash)
            return self.allowed_result(token_hash, token_display, record.bound_ip, detail)
        return self.denied_result(
            token_hash,
            token_display,
            record.bound_ip,
            "Client IP does not match the allowed binding rule.",
        )
    async def evaluate_token_binding(self, token: str, client_ip: str) -> BindingCheckResult:
        token_hash = hash_token(token, self.settings.sentinel_hmac_secret)
        token_display = mask_token(token)
        cache_hit, cache_available = await self._load_binding_from_cache(token_hash, client_ip)
        if cache_hit is not None:
            if cache_hit.ip_matched:
                await self._touch_cache(token_hash)
            return self.evaluate_existing_record(cache_hit, token_hash, token_display, "Allowed from cache.")

        if not cache_available:
            logger.warning("Redis is unavailable. Falling back to PostgreSQL for token binding.")
@@ -159,36 +227,8 @@ class BindingService:
            return self._handle_backend_failure(token_hash, token_display)

        if record is not None:
            await self.sync_binding_cache(record.token_hash, record.bound_ip, record.binding_mode, record.allowed_ips, record.status)
            return self.evaluate_existing_record(record, token_hash, token_display, "Allowed from PostgreSQL.")
        try:
            created = await self._create_binding(token_hash, token_display, client_ip)
@@ -202,52 +242,42 @@ class BindingService:
                return self._handle_backend_failure(token_hash, token_display)
            if existing is None:
                return self._handle_backend_failure(token_hash, token_display)
            await self.sync_binding_cache(
                existing.token_hash,
                existing.bound_ip,
                existing.binding_mode,
                existing.allowed_ips,
                existing.status,
            )
            return self.evaluate_existing_record(existing, token_hash, token_display, "Allowed after concurrent bind resolution.")

        await self.sync_binding_cache(
            created.token_hash,
            created.bound_ip,
            created.binding_mode,
            created.allowed_ips,
            created.status,
        )
        return self.allowed_result(token_hash, token_display, created.bound_ip, "First-use bind created.", newly_bound=True)
    async def sync_binding_cache(
        self,
        token_hash: str,
        bound_ip: str,
        binding_mode: str,
        allowed_ips: list[str],
        status_code: int,
    ) -> None:
        if self.redis is None:
            return
        payload = json.dumps(
            {
                "bound_ip": bound_ip,
                "binding_mode": binding_mode,
                "allowed_ips": allowed_ips,
                "status": status_code,
            }
        )
        try:
            await self.redis.set(self.cache_key(token_hash), payload, ex=self.settings.redis_binding_ttl_seconds)
        except Exception:
@@ -336,7 +366,7 @@ class BindingService:
        )
        return series
    async def _load_binding_from_cache(self, token_hash: str, client_ip: str) -> tuple[BindingRecord | None, bool]:
        if self.redis is None:
            return None, False
        try:
@@ -348,14 +378,18 @@ class BindingService:
            return None, True

        data = json.loads(raw)
        allowed_ips = [str(item) for item in data.get("allowed_ips", [])]
        binding_mode = str(data.get("binding_mode", BINDING_MODE_SINGLE))
        return (
            BindingRecord(
                id=0,
                token_hash=token_hash,
                token_display="",
                bound_ip=str(data.get("bound_ip", self.build_bound_ip_display(binding_mode, allowed_ips))),
                binding_mode=binding_mode,
                allowed_ips=allowed_ips,
                status=int(data["status"]),
                ip_matched=self.is_client_allowed(client_ip, binding_mode, allowed_ips),
            ),
            True,
        )
```diff
@@ -369,69 +403,33 @@ class BindingService:
             logger.warning("Failed to extend binding cache TTL.", extra={"token_hash": token_hash})

     async def _load_binding_from_db(self, token_hash: str, client_ip: str) -> BindingRecord | None:
-        query = text(
-            """
-            SELECT
-                id,
-                token_hash,
-                token_display,
-                bound_ip::text AS bound_ip,
-                status,
-                CAST(:client_ip AS inet) << bound_ip AS ip_matched
-            FROM token_bindings
-            WHERE token_hash = :token_hash
-            LIMIT 1
-            """
-        )
         async with self.session_factory() as session:
-            result = await session.execute(query, {"token_hash": token_hash, "client_ip": client_ip})
-            row = result.mappings().first()
-        if row is None:
+            binding = await session.scalar(select(TokenBinding).where(TokenBinding.token_hash == token_hash).limit(1))
+        if binding is None:
             return None
-        return BindingRecord(
-            id=int(row["id"]),
-            token_hash=str(row["token_hash"]),
-            token_display=str(row["token_display"]),
-            bound_ip=str(row["bound_ip"]),
-            status=int(row["status"]),
-            ip_matched=bool(row["ip_matched"]),
-        )
+        return self.to_binding_record(binding, client_ip)

     async def _create_binding(self, token_hash: str, token_display: str, client_ip: str) -> BindingRecord | None:
-        statement = text(
-            """
-            INSERT INTO token_bindings (token_hash, token_display, bound_ip, status)
-            VALUES (:token_hash, :token_display, CAST(:bound_ip AS cidr), :status)
-            ON CONFLICT (token_hash) DO NOTHING
-            RETURNING id, token_hash, token_display, bound_ip::text AS bound_ip, status
-            """
-        )
         async with self.session_factory() as session:
             try:
-                result = await session.execute(
-                    statement,
-                    {
-                        "token_hash": token_hash,
-                        "token_display": token_display,
-                        "bound_ip": client_ip,
-                        "status": STATUS_ACTIVE,
-                    },
-                )
-                row = result.mappings().first()
+                binding = TokenBinding(
+                    token_hash=token_hash,
+                    token_display=token_display,
+                    bound_ip=client_ip,
+                    binding_mode=BINDING_MODE_SINGLE,
+                    allowed_ips=[client_ip],
+                    status=STATUS_ACTIVE,
+                )
+                session.add(binding)
+                await session.flush()
                 await session.commit()
-            except SQLAlchemyError:
+                await session.refresh(binding)
+            except SQLAlchemyError as exc:
                 await session.rollback()
+                if "duplicate key" in str(exc).lower() or "unique" in str(exc).lower():
+                    return None
                 raise
-        if row is None:
-            return None
-        return BindingRecord(
-            id=int(row["id"]),
-            token_hash=str(row["token_hash"]),
-            token_display=str(row["token_display"]),
-            bound_ip=str(row["bound_ip"]),
-            status=int(row["status"]),
-            ip_matched=True,
-        )
+        return self.to_binding_record(binding, client_ip)

     def _handle_backend_failure(self, token_hash: str, token_display: str) -> BindingCheckResult:
         runtime_settings = self.runtime_settings_getter()
```
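With the inet `<<` operator gone from the SQL path, the match decision moves into `self.is_client_allowed`, whose body is not shown in this diff. A plausible implementation, assuming the three mode names from the README (`single`, `multiple`, `all`) and Python's standard `ipaddress` module:

```python
import ipaddress

def is_client_allowed(client_ip: str, binding_mode: str, allowed_ips: list[str]) -> bool:
    """Decide whether client_ip satisfies a binding rule.

    'all' skips source checks entirely; 'single' accepts one IP or one
    CIDR; 'multiple' requires an exact match against discrete IPs.
    """
    if binding_mode == "all":
        return True
    client = ipaddress.ip_address(client_ip)
    if binding_mode == "multiple":
        return any(client == ipaddress.ip_address(entry) for entry in allowed_ips)
    # single: the one entry may be a bare IP or a CIDR like 192.168.1.0/24;
    # strict=False lets a host address stand in for its /32 or /128 network.
    if not allowed_ips:
        return False
    network = ipaddress.ip_network(allowed_ips[0], strict=False)
    return client in network
```

Note this runs on the cache-read path too, which is why `_load_binding_from_cache` now needs `client_ip` as a parameter.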

View File

```diff
@@ -4,20 +4,22 @@ CREATE TABLE token_bindings (
     id BIGSERIAL PRIMARY KEY,
     token_hash VARCHAR(64) NOT NULL UNIQUE,
     token_display VARCHAR(20) NOT NULL,
-    bound_ip CIDR NOT NULL,
+    bound_ip TEXT NOT NULL,
+    binding_mode VARCHAR(16) NOT NULL DEFAULT 'single',
+    allowed_ips JSONB NOT NULL DEFAULT '[]'::jsonb,
     status SMALLINT NOT NULL DEFAULT 1,
     first_used_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
     last_used_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
     created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 );

 CREATE INDEX idx_token_bindings_hash ON token_bindings(token_hash);
-CREATE INDEX idx_token_bindings_ip ON token_bindings USING GIST (bound_ip inet_ops);
+CREATE INDEX idx_token_bindings_ip ON token_bindings(bound_ip);

 CREATE TABLE intercept_logs (
     id BIGSERIAL PRIMARY KEY,
     token_hash VARCHAR(64) NOT NULL,
     token_display VARCHAR(20) NOT NULL,
-    bound_ip CIDR NOT NULL,
+    bound_ip TEXT NOT NULL,
     attempt_ip INET NOT NULL,
     alerted BOOLEAN NOT NULL DEFAULT FALSE,
     intercepted_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
```
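Since `bound_ip` is now free-form TEXT and `allowed_ips` is an untyped JSONB list, the database no longer rejects malformed addresses the way the old CIDR column did, so validation has to happen before rows are written. A hedged sketch of the kind of pre-write check the admin update path might perform (the function name and exact rules are assumptions, not shown in this diff):

```python
import ipaddress

def validate_rule(binding_mode: str, allowed_ips: list[str]) -> list[str]:
    # Hypothetical pre-write check: entries destined for the JSONB column
    # must parse as an IP (multiple mode) or an IP/CIDR (single mode).
    if binding_mode == "all":
        return []  # 'all' ignores source IPs, so no entries are stored
    if not allowed_ips:
        raise ValueError("at least one IP or CIDR is required")
    if binding_mode == "single":
        if len(allowed_ips) != 1:
            raise ValueError("single mode takes exactly one IP or CIDR")
        ipaddress.ip_network(allowed_ips[0], strict=False)  # raises on garbage
        return allowed_ips
    # multiple: discrete IPs only, no CIDR ranges
    for entry in allowed_ips:
        ipaddress.ip_address(entry)
    return allowed_ips
```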

View File

```diff
@@ -33,7 +33,8 @@ const route = useRoute()
 const router = useRouter()
 const form = reactive({
   id: null,
-  bound_ip: '',
+  binding_mode: 'single',
+  allowed_ips_text: '',
 })
 const filters = reactive({
   token_suffix: '',
@@ -170,16 +171,6 @@ function statusText(row) {
   return isDormant(row) ? '沉寂' : '正常'
 }

-function ipTypeLabel(boundIp) {
-  if (!boundIp) {
-    return '未知'
-  }
-  if (!boundIp.includes('/')) {
-    return '单个 IP'
-  }
-  return boundIp.endsWith('/32') || boundIp.endsWith('/128') ? '单个 IP' : 'CIDR 网段'
-}
-
 function rowClassName({ row }) {
   if (row.status === 2) {
     return 'binding-row--banned'
@@ -238,18 +229,52 @@ async function searchBindings() {
 function openEdit(row) {
   form.id = row.id
-  form.bound_ip = row.bound_ip
+  form.binding_mode = row.binding_mode
+  form.allowed_ips_text = (row.allowed_ips || []).join('\n')
   dialogVisible.value = true
 }

+function normalizeAllowedIpText(value) {
+  return value
+    .split(/[\n,]/)
+    .map((item) => item.trim())
+    .filter(Boolean)
+}
+
+function bindingModeLabel(mode) {
+  if (mode === 'all') {
+    return '全部放行'
+  }
+  if (mode === 'multiple') {
+    return '多 IP'
+  }
+  return '单地址'
+}
+
+function bindingRuleText(row) {
+  if (row.binding_mode === 'all') {
+    return '全部 IP 放行'
+  }
+  return row.bound_ip
+}
+
 async function submitEdit() {
-  if (!form.bound_ip) {
-    ElMessage.warning('请输入 CIDR 或单个 IP。')
+  const allowedIps = normalizeAllowedIpText(form.allowed_ips_text)
+  if (form.binding_mode !== 'all' && !allowedIps.length) {
+    ElMessage.warning('请填写至少一个 IP 或 CIDR。')
     return
   }
   try {
-    await run(() => updateBindingIp({ id: form.id, bound_ip: form.bound_ip }), '更新绑定失败。')
-    ElMessage.success('绑定地址已更新。')
+    await run(
+      () =>
+        updateBindingIp({
+          id: form.id,
+          binding_mode: form.binding_mode,
+          allowed_ips: allowedIps,
+        }),
+      '更新绑定失败。',
+    )
+    ElMessage.success('绑定规则已更新。')
     dialogVisible.value = false
     await refreshBindings()
   } catch {}
@@ -297,7 +322,7 @@ watch(
   <PageHero
     eyebrow="绑定控制"
     title="围绕绑定表格完成查询、核对与处置"
-    description="按 Token 尾号或绑定地址快速检索,确认最近活跃时间后直接编辑 CIDR、解绑或封禁。"
+    description="按 Token 尾号或绑定地址快速检索,确认最近活跃时间后直接编辑规则、解绑或封禁。"
   >
     <template #aside>
       <div class="hero-stat-pair">
@@ -321,7 +346,7 @@ watch(
       <div class="binding-head-copy">
         <p class="eyebrow">绑定列表</p>
         <h3 class="section-title">聚焦表格本身,减少干扰信息</h3>
-        <p class="muted">页面只保留查询、状态和处置动作,方便快速完成 IP 管理。</p>
+        <p class="muted">支持单地址、多个 IP 与全部放行三种规则,页面只保留高频查询与处置动作。</p>
       </div>
       <div class="binding-summary-strip" aria-label="Binding summary">
         <article class="binding-summary-card">
@@ -423,7 +448,7 @@ watch(
             <el-icon><SwitchButton /></el-icon>
             当前匹配 {{ formatCompactNumber(total) }} 条绑定
           </span>
-          <span class="binding-table-note">沉寂表示 {{ staleWindowDays }} 天及以上没有请求</span>
+          <span class="binding-table-note">沉寂表示 {{ staleWindowDays }} 天及以上没有请求;规则支持单地址、多 IP 与全部放行</span>
         </div>
       </div>
@@ -445,10 +470,10 @@ watch(
         <template #default="{ row }">
           <div class="binding-ip-cell">
             <div class="binding-ip-line">
-              <code>{{ row.bound_ip }}</code>
-              <el-button text :icon="CopyDocument" @click="copyValue(row.bound_ip, '绑定地址')">复制</el-button>
+              <code>{{ bindingRuleText(row) }}</code>
+              <el-button text :icon="CopyDocument" @click="copyValue(bindingRuleText(row), '绑定规则')">复制</el-button>
             </div>
-            <span class="muted">{{ ipTypeLabel(row.bound_ip) }}</span>
+            <span class="muted">{{ bindingModeLabel(row.binding_mode) }}</span>
           </div>
         </template>
       </el-table-column>
@@ -476,7 +501,7 @@ watch(
       <el-table-column label="操作" min-width="360" fixed="right">
         <template #default="{ row }">
           <div class="binding-action-row">
-            <el-button :icon="EditPen" @click="openEdit(row)">编辑 CIDR</el-button>
+            <el-button :icon="EditPen" @click="openEdit(row)">编辑规则</el-button>
             <el-button
               :icon="row.status === 1 ? Lock : Unlock"
               :type="row.status === 1 ? 'warning' : 'success'"
@@ -517,17 +542,47 @@ watch(
     </div>
   </section>

-  <el-dialog v-model="dialogVisible" title="更新绑定地址" width="420px">
+  <el-dialog v-model="dialogVisible" title="更新绑定规则" width="520px">
     <el-form label-position="top">
-      <el-form-item label="CIDR 或单个 IP">
+      <el-form-item label="规则模式">
+        <el-radio-group v-model="form.binding_mode" class="binding-mode-group">
+          <el-radio-button value="single">单地址</el-radio-button>
+          <el-radio-button value="multiple">多个 IP</el-radio-button>
+          <el-radio-button value="all">全部放行</el-radio-button>
+        </el-radio-group>
+      </el-form-item>
+      <el-form-item v-if="form.binding_mode === 'single'" label="IP 或 CIDR">
         <el-input
-          v-model="form.bound_ip"
+          v-model="form.allowed_ips_text"
           autocomplete="off"
           name="bound_ip"
           placeholder="192.168.1.0/24"
           @keyup.enter="submitEdit"
         />
       </el-form-item>
+      <el-form-item v-else-if="form.binding_mode === 'multiple'" label="多个 IP">
+        <el-input
+          v-model="form.allowed_ips_text"
+          type="textarea"
+          :rows="6"
+          autocomplete="off"
+          name="allowed_ips"
+          placeholder="每行一个 IP,例如:&#10;192.168.1.10&#10;192.168.1.11"
+        />
+      </el-form-item>
+      <el-alert
+        v-else
+        type="warning"
+        :closable="false"
+        title="全部放行后,这个 Token 不再校验来源 IP。仅建议在确有必要的内部场景中使用。"
+      />
+      <p class="muted">
+        单地址模式支持单个 IP 或一个 CIDR;多 IP 模式按逐行 IP 精确放行;全部放行表示跳过来源地址校验。
+      </p>
     </el-form>
     <template #footer>
```