diff --git a/crawler/README.md b/crawler/README.md
index c353c6e..e9bf003 100644
--- a/crawler/README.md
+++ b/crawler/README.md
@@ -102,6 +102,18 @@ print('抓取:', n_fetched, '去重新增:', n_news, '面板写入:', n_panel)
有网络且有关键词命中时,应看到非零数字;再查 `curl -s http://localhost:3001/api/situation` 或前端事件脉络是否出现新数据。
+**Testing by time range (e.g. from Feb 28, 00:00 to now)**: the RSS pipeline can keep only entries published after a given start time, which makes it easy to test data "from midnight of a given day to now".
+
+```bash
+# Default: from 2026-02-28 00:00 to now
+npm run crawler:once:range
+
+# Or specify the start time explicitly
+./scripts/run-crawler-range.sh 2026-02-28T00:00:00
+```
+
+Requires the environment variable `CRAWL_START_DATE` (an ISO timestamp, e.g. `2026-02-28T00:00:00`). The GDELT time range is set when starting the gdelt service, e.g. `GDELT_TIMESPAN=3d npm run gdelt` (last 3 days).
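
The `CRAWL_START_DATE` handling in `pipeline.py` normalizes the ISO string to an aware UTC datetime. A minimal sketch of that normalization (the function name is illustrative, not part of the project):

```python
from datetime import datetime, timezone

def parse_crawl_start(value: str) -> datetime:
    """Parse an ISO-8601 string (optionally ending in 'Z') into an aware UTC datetime."""
    raw = value.strip().replace("Z", "+00:00")  # fromisoformat() cannot parse a bare 'Z'
    dt = datetime.fromisoformat(raw)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # treat naive input as UTC
    return dt.astimezone(timezone.utc)

print(parse_crawl_start("2026-02-28T00:00:00Z"))  # 2026-02-28 00:00:00+00:00
```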
+
### 4. 仅测提取逻辑(不写库)
```bash
@@ -180,6 +192,69 @@ RSS → 抓取 → 清洗 → 去重 → 写 news_content / situation_update /
---
+## Main news sources (RSS)
+
+Feeds are configured in `RSS_FEEDS` in `crawler/config.py`; the current list:
+
+| Country/Region | Sources |
+|------|------------|
+| **US** | Reuters Top News, NYT World |
+| **UK** | BBC World, BBC Middle East, The Guardian World |
+| **France** | France 24 |
+| **Germany** | DW World |
+| **Russia** | TASS, RT |
+| **China** | Xinhua World, CGTN World |
+| **Phoenix (ifeng)** | 凤凰军事, 凤凰国际 (feedx.net mirrors) |
+| **Iran** | Press TV |
+| **Qatar / Middle East** | Al Jazeera All, Al Jazeera Middle East |
+
+The per-feed timeout is controlled by `FEED_TIMEOUT` (default 12 s); one feed failing does not affect the others.
+
+**Filtering**: each entry's title+summary must match at least one keyword in `config.KEYWORDS` before it enters the pipeline (Iran / US / Middle East / military / base / Hormuz, etc.; see `config.KEYWORDS`).
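
The keyword gate can be pictured as a simple case-insensitive substring match. A sketch, assuming `KEYWORDS` is a flat list of strings (the subset below is illustrative):

```python
KEYWORDS = ["iran", "middle east", "strike", "基地", "霍尔木兹"]  # illustrative subset

def hits_keywords(title: str, summary: str, keywords=KEYWORDS) -> bool:
    """True if the combined title+summary contains at least one keyword."""
    text = f"{title} {summary}".lower()
    return any(k.lower() in text for k in keywords)

print(hits_keywords("US strike near Hormuz", ""))  # True ("strike" matches)
```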
+
+### Reachability from mainland China (for reference only; verify against your actual network)
+
+| Usually reachable directly | Notes |
+|-------------------|------|
+| **Xinhua** `english.news.cn/rss/world.xml` | China's official news agency, English service |
+| **CGTN** `cgtn.com/rss/world` | China Global Television Network |
+| **Phoenix** `feedx.net/rss/ifengmil.xml`, `ifengworld.xml` | Third-party RSS mirror; Chinese military/world news |
+| **People's Daily Online** `people.com.cn/rss/military.xml`, `world.xml` | Military, world news |
+| **Sina** `rss.sina.com.cn` military/news | Sina Military, Sina rolling news |
+| **China Daily** `chinadaily.com.cn/rss/world_rss.xml` | World news |
+| **China Military Online** `english.chinamil.com.cn/rss.xml` | PLA Daily, English edition |
+| **TASS** `tass.com/rss/v2.xml` | Russian state news agency |
+| **RT** `rt.com/rss/` | Russia Today |
+| **DW** `rss.dw.com/xml/rss-en-world` | Deutsche Welle; reachable in some regions/periods |
+
+**Usually require a proxy from mainland China**: the overseas RSS feeds of Reuters, NYT, BBC, Guardian, France 24, Al Jazeera, Press TV, etc.; direct connections tend to time out or be blocked. For in-country deployments, set `CRAWLER_USE_PROXY=1` and configure a proxy, or keep only the feeds in the table above (comment out unreachable URLs in `config.py` to reduce timeout waits).
+
+**Other domestic media (Toutiao, NetEase, Tencent, Sina Weibo, etc.)**: Toutiao, Tencent News, Sina Weibo and the like are mostly app/feed products with **no official public RSS**. To ingest them, consider third-party RSS aggregators (e.g. FeedX or RSSHub, where a matching channel exists) or platform open APIs (where available and compliant to use). The crawler already includes domestic sources with proper RSS feeds: Sina (rss.sina.com.cn), People's Daily Online, China Daily, and China Military Online. NetEase News once had an RSS center page; find the specific column XML on its subscription page before adding it to `config.py`.
+
+---
+
+## Why the crawler keeps fetching nothing (0 items)
+
+Common causes and remedies:
+
+| Cause | Details | Suggestion |
+|------|------|------|
+| **RSS feeds unreachable from mainland China** | Most feeds are overseas sites (Reuters, BBC, NYT, Guardian, France 24, DW, TASS, RT, Al Jazeera, Press TV, etc.); direct connections tend to time out or be blocked. | Use a proxy: set `CRAWLER_USE_PROXY=1` and configure a system/environment HTTP(S) proxy, or run the crawler on an overseas server. |
+| **No keyword hits** | An entry is kept only if its title or summary contains at least one word from `KEYWORDS` (e.g. iran, usa, middle east, strike, 基地). If none of the current headlines touch US–Iran/Middle East topics, a whole round can yield 0 items. | Run `npm run crawler:test` first to see whether it returns 0; if it stays at 0 while the network is fine, relax or extend `KEYWORDS` in `config.py` (e.g. add generic words for testing). |
+| **Per-feed timeouts leave the round empty** | If no feed responds within `FEED_TIMEOUT`, every feed returns an empty list and the total stays 0. | Increase `FEED_TIMEOUT` (e.g. 20); or test a single RSS URL in a browser/with curl first; in mainland China, retry behind a proxy. |
+| **Classification/cleaning depends on AI and fails** | Each keyword-matched entry calls `classify_and_severity` (Ollama or DashScope). If Ollama is not running locally, DashScope is not configured, and the rule-based fallback errors out, that entry can be affected. | Set `PARSER_AI_DISABLED=1` to use purely rule-based classification without Ollama/DashScope; or configure `DASHSCOPE_API_KEY` / a local Ollama first. |
+| **Nothing new after dedup** | The fetch count is >0, but after dedup on `news_content.content_hash` the "new" count is 0, so nothing is written to `situation_update` and the event timeline does not grow. | This is normal: refetching the same batch of news is never written twice. New entries appear only when new headlines match the keywords. |
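
The "test a single RSS URL" suggestion above can be done with curl, or with a few lines of Python mirroring the `FEED_TIMEOUT` behavior (a sketch only; the real scraper's fetch code may differ):

```python
import urllib.request

def probe_feed(url: str, timeout: float = 12.0) -> str:
    """Try to open a feed URL within `timeout` seconds; report a coarse status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else f"http {resp.status}"
    except Exception as e:  # timeouts, DNS failures, blocked connections, bad URLs
        return f"error: {e.__class__.__name__}"

# probe_feed("https://tass.com/rss/v2.xml")  # result is network-dependent
```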
+
+**Quick self-check**:
+
+```bash
+npm run crawler:test
+```
+
+It prints "RSS 抓取: N 条" (N items fetched). If N is always 0, check network/proxy and `KEYWORDS` first; if N>0 but the dashboard shows no new events, it is usually dedup finding nothing new, or `POST /api/crawler/notify` not being called.
+
+---
+
## 优化后验证效果示例
以下为「正文抓取 + AI 精确提取 + 增量与地点更新」优化后,单条新闻从输入到前端展示的完整示例,便于对照验证。
diff --git a/crawler/__pycache__/article_fetcher.cpython-39.pyc b/crawler/__pycache__/article_fetcher.cpython-39.pyc
new file mode 100644
index 0000000..d7c0337
Binary files /dev/null and b/crawler/__pycache__/article_fetcher.cpython-39.pyc differ
diff --git a/crawler/__pycache__/config.cpython-311.pyc b/crawler/__pycache__/config.cpython-311.pyc
index f1aa1f4..1dff55a 100644
Binary files a/crawler/__pycache__/config.cpython-311.pyc and b/crawler/__pycache__/config.cpython-311.pyc differ
diff --git a/crawler/__pycache__/config.cpython-39.pyc b/crawler/__pycache__/config.cpython-39.pyc
index 2127899..38409e4 100644
Binary files a/crawler/__pycache__/config.cpython-39.pyc and b/crawler/__pycache__/config.cpython-39.pyc differ
diff --git a/crawler/__pycache__/pipeline.cpython-311.pyc b/crawler/__pycache__/pipeline.cpython-311.pyc
index d06eb08..5af738e 100644
Binary files a/crawler/__pycache__/pipeline.cpython-311.pyc and b/crawler/__pycache__/pipeline.cpython-311.pyc differ
diff --git a/crawler/__pycache__/pipeline.cpython-39.pyc b/crawler/__pycache__/pipeline.cpython-39.pyc
index 022b9c6..5491a58 100644
Binary files a/crawler/__pycache__/pipeline.cpython-39.pyc and b/crawler/__pycache__/pipeline.cpython-39.pyc differ
diff --git a/crawler/__pycache__/realtime_conflict_service.cpython-39.pyc b/crawler/__pycache__/realtime_conflict_service.cpython-39.pyc
index e7c6f43..ce16d42 100644
Binary files a/crawler/__pycache__/realtime_conflict_service.cpython-39.pyc and b/crawler/__pycache__/realtime_conflict_service.cpython-39.pyc differ
diff --git a/crawler/config.py b/crawler/config.py
index ee34a26..2904414 100644
--- a/crawler/config.py
+++ b/crawler/config.py
@@ -42,6 +42,13 @@ RSS_FEEDS = [
# 凤凰网(军事 + 国际,中文视角)
{"name": "凤凰军事", "url": "https://feedx.net/rss/ifengmil.xml"},
{"name": "凤凰国际", "url": "https://feedx.net/rss/ifengworld.xml"},
+ # Domestic media (reachable directly from mainland China; adds a Chinese-language perspective)
+ {"name": "人民网军事", "url": "http://www.people.com.cn/rss/military.xml"},
+ {"name": "人民网国际", "url": "http://www.people.com.cn/rss/world.xml"},
+ {"name": "新浪军事", "url": "http://rss.sina.com.cn/rss/jczs/index.shtml"},
+ {"name": "新浪新闻", "url": "http://rss.sina.com.cn/rss/roll/news.xml"},
+ {"name": "中国日报国际", "url": "http://www.chinadaily.com.cn/rss/world_rss.xml"},
+ {"name": "中国军网", "url": "https://english.chinamil.com.cn/rss.xml"},
# 伊朗
"https://www.presstv.ir/rss",
# 卡塔尔(中东)
diff --git a/crawler/pipeline.py b/crawler/pipeline.py
index 45f7fd7..da7d680 100644
--- a/crawler/pipeline.py
+++ b/crawler/pipeline.py
@@ -109,6 +109,33 @@ def run_full_pipeline(
if not items:
return 0, 0, 0
+ # Optional: keep only entries published after a given start time (e.g. CRAWL_START_DATE=2026-02-28T00:00:00)
+ start_date_env = os.environ.get("CRAWL_START_DATE", "").strip()
+ if start_date_env:
+ try:
+ raw = start_date_env.replace("Z", "+00:00").strip()
+ start_dt = datetime.fromisoformat(raw)
+ if start_dt.tzinfo is None:
+ start_dt = start_dt.replace(tzinfo=timezone.utc)
+ else:
+ start_dt = start_dt.astimezone(timezone.utc)
+ before = len(items)
+ items = [it for it in items if (it.get("published") or datetime.min.replace(tzinfo=timezone.utc)) >= start_dt]
+ if before > len(items):
+ print(f" [pipeline] 按 CRAWL_START_DATE={start_date_env} 过滤后保留 {len(items)} 条(原 {before} 条)")
+ except Exception as e:
+ print(f" [warn] CRAWL_START_DATE 解析失败,忽略: {e}")
+
+ if not items:
+ return 0, 0, 0
+ n_total = len(items)
+ print(f" [pipeline] 抓取 {n_total} 条")
+ for i, it in enumerate(items[:5]):
+ full = (it.get("title") or it.get("summary") or "").strip()
+ print(f" [{i + 1}] {full[:60]}" + ("…" if len(full) > 60 else ""))  # ellipsis only when actually truncated
+ if n_total > 5:
+ print(f" ... 共 {n_total} 条")
+
# 2. 清洗(标题/摘要/分类,符合面板 schema)
if translate:
from translate_utils import translate_to_chinese
@@ -128,6 +155,11 @@ def run_full_pipeline(
# 3. 去重:落库 news_content,仅新项返回
new_items, n_news = save_and_dedup(items, db_path=path)
+ if new_items:
+ print(f" [pipeline] 去重后新增 {n_news} 条,写入事件脉络 {len(new_items)} 条")
+ for i, it in enumerate(new_items[:3]):
+ title = (it.get("title") or it.get("summary") or "").strip()[:55]
+ print(f" 新增 [{i + 1}] {title}" + ("…" if len((it.get("title") or it.get("summary") or "").strip()) > 55 else ""))
# 3.5 数据增强:为参与 AI 提取的条目抓取正文,便于从全文提取精确数据(伤亡、基地等)
if new_items:
diff --git a/crawler/realtime_conflict_service.py b/crawler/realtime_conflict_service.py
index 5a54a1f..abeaab5 100644
--- a/crawler/realtime_conflict_service.py
+++ b/crawler/realtime_conflict_service.py
@@ -313,8 +313,10 @@ def fetch_news() -> None:
if GDELT_DISABLED:
_rss_to_gdelt_fallback()
_notify_node()
- if n_fetched > 0:
- print(f"[{datetime.now().strftime('%H:%M:%S')}] RSS 抓取 {n_fetched} 条,去重后新增 {n_news} 条资讯,面板 {n_panel} 条")
+ ts = datetime.now().strftime("%H:%M:%S")
+ print(f"[{ts}] RSS 抓取 {n_fetched} 条,去重后新增 {n_news} 条资讯,写入事件脉络 {n_panel} 条")
+ if n_fetched == 0:
+ print(f"[{ts}] (0 条:检查网络、RSS 源或 KEYWORDS 过滤)")
except Exception as e:
LAST_FETCH["error"] = str(e)
print(f"[{datetime.now().strftime('%H:%M:%S')}] 新闻抓取失败: {e}")
@@ -433,10 +435,8 @@ def _get_conflict_stats() -> dict:
@app.on_event("startup")
async def startup():
+ """Start only the background periodic task, without blocking on a first fetch, to avoid startup timeouts (so /crawler/status used by the verify script is ready quickly)"""
global _bg_task
- loop = asyncio.get_event_loop()
- await loop.run_in_executor(None, fetch_news)
- await loop.run_in_executor(None, fetch_gdelt_events)
_bg_task = asyncio.create_task(_periodic_fetch())
diff --git a/crawler/run_once.py b/crawler/run_once.py
new file mode 100644
index 0000000..2b5edfa
--- /dev/null
+++ b/crawler/run_once.py
@@ -0,0 +1,51 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Run one crawler round standalone: fetch → clean → dedup → write DB → notify Node (optional).
+Prints the fetch count and content summaries directly to the terminal, for easy troubleshooting.
+Usage (from the project root or the crawler directory):
+ python run_once.py
+ python -c "import run_once; run_once.main()"
+Or: npm run crawler:once
+"""
+import os
+import sys
+from datetime import datetime
+
+# Make sibling modules importable when run as a script
+if __name__ == "__main__":
+ sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+
+def main():
+ from config import DB_PATH, API_BASE
+ from pipeline import run_full_pipeline
+
+ crawl_start = os.environ.get("CRAWL_START_DATE", "").strip()
+ print("========================================")
+ print("爬虫单次运行(RSS → 清洗 → 去重 → 写库)")
+ print("DB:", DB_PATH)
+ print("API_BASE:", API_BASE)
+ if crawl_start:
+ print("时间范围: 仅保留 CRAWL_START_DATE 之后:", crawl_start)
+ print("========================================\n")
+
+ n_fetched, n_news, n_panel = run_full_pipeline(
+ db_path=DB_PATH,
+ api_base=API_BASE,
+ translate=True,
+ notify=True,
+ )
+
+ print("")
+ print("----------------------------------------")
+ print("本轮结果:")
+ print(f" 抓取: {n_fetched} 条")
+ print(f" 去重后新增资讯: {n_news} 条")
+ print(f" 写入事件脉络: {n_panel} 条")
+ if n_fetched == 0:
+ print(" (0 条:检查网络、RSS 源或 config.KEYWORDS 过滤)")
+ print("----------------------------------------")
+ return 0
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/package.json b/package.json
index 6e3c9d0..62ce4fc 100644
--- a/package.json
+++ b/package.json
@@ -10,6 +10,8 @@
"api:seed": "node server/seed.js",
"crawler": "cd crawler && python main.py",
"gdelt": "cd crawler && uvicorn realtime_conflict_service:app --host 0.0.0.0 --port 8000",
+ "crawler:once": "cd crawler && python run_once.py",
+ "crawler:once:range": "./scripts/run-crawler-range.sh",
"crawler:test": "cd crawler && python3 -c \"import sys; sys.path.insert(0,'.'); from scrapers.rss_scraper import fetch_all; n=len(fetch_all()); print('RSS 抓取:', n, '条' if n else '(0 条,检查网络或关键词过滤)')\"",
"crawler:test:extraction": "cd crawler && python3 -m pytest tests/test_extraction.py -v",
"build": "vite build",
@@ -17,7 +19,9 @@
"lint": "eslint .",
"preview": "vite preview",
"verify": "./scripts/verify-pipeline.sh",
- "verify:full": "./scripts/verify-pipeline.sh --start-crawler"
+ "verify:full": "./scripts/verify-pipeline.sh --start-crawler",
+ "verify-panels": "node scripts/verify-panels.cjs",
+ "check-crawler-data": "node scripts/check-crawler-data.cjs"
},
"dependencies": {
"cors": "^2.8.5",
diff --git a/server/data.db-shm b/server/data.db-shm
deleted file mode 100644
index e1d2b9b..0000000
Binary files a/server/data.db-shm and /dev/null differ
diff --git a/server/data.db-wal b/server/data.db-wal
deleted file mode 100644
index 77969ab..0000000
Binary files a/server/data.db-wal and /dev/null differ
diff --git a/server/openapi.js b/server/openapi.js
index e46b3d8..65f73a0 100644
--- a/server/openapi.js
+++ b/server/openapi.js
@@ -50,7 +50,7 @@ module.exports = {
'/api/visit': {
post: {
summary: '来访统计',
- description: '记录 IP,返回当前在看人数和累积访问',
+ description: '记录 IP,返回当前在看人数和看过人数',
tags: ['统计'],
responses: {
200: {
diff --git a/server/routes.js b/server/routes.js
index c9fd475..25dc5f0 100644
--- a/server/routes.js
+++ b/server/routes.js
@@ -98,19 +98,29 @@ router.get('/situation', (req, res) => {
}
})
-// 来访统计:记录 IP,返回在看/累积
+// Visit stats: record IP (or a per-tab viewer-id in dev/local), return current viewers / total visitors
function getClientIp(req) {
const forwarded = req.headers['x-forwarded-for']
if (forwarded) return forwarded.split(',')[0].trim()
return req.ip || req.socket?.remoteAddress || 'unknown'
}
+function getVisitKey(req) {
+ const vid = req.headers['x-viewer-id']
+ const ip = getClientIp(req)
+ const isLocal = ip === '127.0.0.1' || ip === '::1' || ip === '::ffff:127.0.0.1'
+ if (typeof vid === 'string' && vid.trim().length > 0 && (process.env.NODE_ENV === 'development' || isLocal)) {
+ return 'vid:' + vid.trim()
+ }
+ return ip
+}
+
router.post('/visit', (req, res) => {
try {
- const ip = getClientIp(req)
+ const visitKey = getVisitKey(req)
db.prepare(
"INSERT OR REPLACE INTO visits (ip, last_seen) VALUES (?, datetime('now'))"
- ).run(ip)
+ ).run(visitKey)
db.prepare(
'INSERT INTO visitor_count (id, total) VALUES (1, 1) ON CONFLICT(id) DO UPDATE SET total = total + 1'
).run()
@@ -177,4 +187,160 @@ router.get('/events', (req, res) => {
}
})
+// ---------- Manual dashboard data fixes (used by the edit page) ----------
+function broadcastAfterEdit(req) {
+ try {
+ const broadcast = req.app?.get?.('broadcastSituation')
+ if (typeof broadcast === 'function') broadcast()
+ } catch (_) {}
+}
+
+/** GET raw editable data: combat losses, key locations, situation updates, force summary */
+router.get('/edit/raw', (req, res) => {
+ try {
+ const lossesUs = db.prepare('SELECT * FROM combat_losses WHERE side = ?').get('us')
+ const lossesIr = db.prepare('SELECT * FROM combat_losses WHERE side = ?').get('iran')
+ const locUs = db.prepare('SELECT id, side, name, lat, lng, type, region, status, damage_level FROM key_location WHERE side = ?').all('us')
+ const locIr = db.prepare('SELECT id, side, name, lat, lng, type, region, status, damage_level FROM key_location WHERE side = ?').all('iran')
+ const updates = db.prepare('SELECT id, timestamp, category, summary, severity FROM situation_update ORDER BY timestamp DESC LIMIT 80').all()
+ const summaryUs = db.prepare('SELECT * FROM force_summary WHERE side = ?').get('us')
+ const summaryIr = db.prepare('SELECT * FROM force_summary WHERE side = ?').get('iran')
+ res.json({
+ combatLosses: { us: lossesUs || null, iran: lossesIr || null },
+ keyLocations: { us: locUs || [], iran: locIr || [] },
+ situationUpdates: updates || [],
+ forceSummary: { us: summaryUs || null, iran: summaryIr || null },
+ })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
+
+/** PUT update combat losses (US / Iran) */
+router.put('/edit/combat-losses', (req, res) => {
+ try {
+ const side = req.body?.side
+ if (side !== 'us' && side !== 'iran') {
+ return res.status(400).json({ error: 'side must be us or iran' })
+ }
+ const row = db.prepare('SELECT * FROM combat_losses WHERE side = ?').get(side)
+ if (!row) return res.status(404).json({ error: 'combat_losses row not found' })
+ const cols = ['bases_destroyed', 'bases_damaged', 'personnel_killed', 'personnel_wounded',
+ 'civilian_killed', 'civilian_wounded', 'aircraft', 'warships', 'armor', 'vehicles',
+ 'drones', 'missiles', 'helicopters', 'submarines', 'tanks', 'carriers', 'civilian_ships', 'airport_port']
+ const updates = []
+ const values = []
+ for (const c of cols) {
+ if (req.body[c] !== undefined) {
+ updates.push(`${c} = ?`)
+ values.push(Number(req.body[c]) || 0)
+ }
+ }
+ if (updates.length === 0) return res.status(400).json({ error: 'no fields to update' })
+ values.push(side)
+ db.prepare(`UPDATE combat_losses SET ${updates.join(', ')} WHERE side = ?`).run(...values)
+ db.prepare("INSERT OR REPLACE INTO situation (id, data, updated_at) VALUES (1, '{}', ?)").run(new Date().toISOString())
+ broadcastAfterEdit(req)
+ res.json({ ok: true })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
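
These edit routes all build their UPDATE statement from a fixed column whitelist, so user input never reaches the SQL text, only the bound values. The pattern, sketched in Python (names and the column subset are illustrative):

```python
ALLOWED = ["bases_destroyed", "personnel_killed"]  # whitelisted columns (subset)

def build_update(table: str, body: dict, key_col: str, key_val):
    """Return (sql, params) touching only whitelisted columns, or (None, []) if nothing to update."""
    sets, values = [], []
    for col in ALLOWED:                 # iterate the whitelist, never the request body's keys
        if col in body:
            sets.append(f"{col} = ?")   # column names come from ALLOWED; values stay bound
            values.append(body[col])
    if not sets:
        return None, []
    return f"UPDATE {table} SET {', '.join(sets)} WHERE {key_col} = ?", values + [key_val]

sql, params = build_update("combat_losses", {"bases_destroyed": 3, "evil": 1}, "side", "us")
print(sql)  # UPDATE combat_losses SET bases_destroyed = ? WHERE side = ?
```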
+
+/** PATCH update a single key location */
+router.patch('/edit/key-location/:id', (req, res) => {
+ try {
+ const id = parseInt(req.params.id, 10)
+ if (!Number.isFinite(id)) return res.status(400).json({ error: 'invalid id' })
+ const row = db.prepare('SELECT id FROM key_location WHERE id = ?').get(id)
+ if (!row) return res.status(404).json({ error: 'key_location not found' })
+ const allowed = ['name', 'lat', 'lng', 'type', 'region', 'status', 'damage_level']
+ const updates = []
+ const values = []
+ for (const k of allowed) {
+ if (req.body[k] !== undefined) {
+ if (k === 'status' && !['operational', 'damaged', 'attacked'].includes(req.body[k])) continue
+ updates.push(`${k} = ?`)
+ values.push(k === 'lat' || k === 'lng' ? Number(req.body[k]) : req.body[k])
+ }
+ }
+ if (updates.length === 0) return res.status(400).json({ error: 'no fields to update' })
+ values.push(id)
+ db.prepare(`UPDATE key_location SET ${updates.join(', ')} WHERE id = ?`).run(...values)
+ db.prepare("INSERT OR REPLACE INTO situation (id, data, updated_at) VALUES (1, '{}', ?)").run(new Date().toISOString())
+ broadcastAfterEdit(req)
+ res.json({ ok: true })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
+
+/** POST create a situation update */
+router.post('/edit/situation-update', (req, res) => {
+ try {
+ const id = (req.body?.id || '').toString().trim() || `man_${Date.now()}_${Math.random().toString(36).slice(2, 9)}`
+ const timestamp = (req.body?.timestamp || new Date().toISOString()).toString().trim()
+ const category = (req.body?.category || 'other').toString().toLowerCase()
+ const summary = (req.body?.summary || '').toString().trim().slice(0, 500)
+ const severity = (req.body?.severity || 'medium').toString().toLowerCase()
+ if (!summary) return res.status(400).json({ error: 'summary required' })
+ const validCat = ['deployment', 'alert', 'intel', 'diplomatic', 'other'].includes(category) ? category : 'other'
+ const validSev = ['low', 'medium', 'high', 'critical'].includes(severity) ? severity : 'medium'
+ db.prepare('INSERT OR REPLACE INTO situation_update (id, timestamp, category, summary, severity) VALUES (?, ?, ?, ?, ?)').run(id, timestamp, validCat, summary, validSev)
+ db.prepare("INSERT OR REPLACE INTO situation (id, data, updated_at) VALUES (1, '{}', ?)").run(new Date().toISOString())
+ broadcastAfterEdit(req)
+ res.json({ ok: true, id })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
+
+/** DELETE remove one situation update */
+router.delete('/edit/situation-update/:id', (req, res) => {
+ try {
+ const id = (req.params.id || '').toString().trim()
+ if (!id) return res.status(400).json({ error: 'id required' })
+ const r = db.prepare('DELETE FROM situation_update WHERE id = ?').run(id)
+ if (r.changes === 0) return res.status(404).json({ error: 'not found' })
+ db.prepare("INSERT OR REPLACE INTO situation (id, data, updated_at) VALUES (1, '{}', ?)").run(new Date().toISOString())
+ broadcastAfterEdit(req)
+ res.json({ ok: true })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
+
+/** PUT update force summary (US / Iran) */
+router.put('/edit/force-summary', (req, res) => {
+ try {
+ const side = req.body?.side
+ if (side !== 'us' && side !== 'iran') {
+ return res.status(400).json({ error: 'side must be us or iran' })
+ }
+ const cols = ['total_assets', 'personnel', 'naval_ships', 'aircraft', 'ground_units', 'uav', 'missile_consumed', 'missile_stock']
+ const updates = []
+ const values = []
+ for (const c of cols) {
+ if (req.body[c] !== undefined) {
+ updates.push(`${c} = ?`)
+ values.push(Number(req.body[c]) || 0)
+ }
+ }
+ if (updates.length === 0) return res.status(400).json({ error: 'no fields to update' })
+ values.push(side)
+ db.prepare(`UPDATE force_summary SET ${updates.join(', ')} WHERE side = ?`).run(...values)
+ db.prepare("INSERT OR REPLACE INTO situation (id, data, updated_at) VALUES (1, '{}', ?)").run(new Date().toISOString())
+ broadcastAfterEdit(req)
+ res.json({ ok: true })
+ } catch (err) {
+ console.error(err)
+ res.status(500).json({ error: err.message })
+ }
+})
+
module.exports = router
diff --git a/src/App.tsx b/src/App.tsx
index 2ea2ed7..690e087 100644
--- a/src/App.tsx
+++ b/src/App.tsx
@@ -1,6 +1,7 @@
import { Routes, Route } from 'react-router-dom'
import { Dashboard } from '@/pages/Dashboard'
import { DbDashboard } from '@/pages/DbDashboard'
+import { EditDashboard } from '@/pages/EditDashboard'
function App() {
return (
@@ -11,6 +12,7 @@ function App() {