Commit X4: Auto-inject Online Research summary into plan context

This commit implements fully automatic injection of online research results into the LLM prompt, with no user click required.

## Backend

### Environment Variables
- Added `PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1` (default: 0) to enable automatic injection of online research results into subsequent `proposeActions` calls.
- Added `is_online_auto_use_as_context()` helper function in `online_research/mod.rs`.
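As a rough sketch, the helper can be reduced to a pure flag parser plus a thin env wrapper (the actual implementation in `online_research/mod.rs` may differ; only the env-var name comes from this commit):

```rust
// Sketch only: the real helper may be structured differently.
// The parsing is split out so the flag logic is testable without the env.
fn parse_auto_use_flag(raw: Option<&str>) -> bool {
    // Enabled only on exactly "1" (default: 0 / disabled).
    matches!(raw.map(str::trim), Some("1"))
}

fn is_online_auto_use_as_context() -> bool {
    parse_auto_use_flag(std::env::var("PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT").ok().as_deref())
}
```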

### Command Changes
- **`propose_actions` command**: Added `online_fallback_reason: Option<String>` parameter to track the error code that triggered online fallback.
- **`llm_planner::plan` function**: Added `online_fallback_reason: Option<&str>` parameter for tracing.
- **Trace Enhancements**: Added `online_fallback_reason` field to trace when `online_fallback_executed` is true.

### Module Exports
- Made `extract_error_code_prefix` public in `online_research/fallback.rs` for frontend use.
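The test names below (`..._empty_when_no_prefix`) suggest the function returns an empty string when no code is present; a plausible sketch, not the actual implementation:

```rust
// Hypothetical sketch of extract_error_code_prefix (fallback.rs): pull a
// leading "ERR_*" token (uppercase letters, digits, underscores) off an
// error message, or return "" when the message has no such prefix.
fn extract_error_code_prefix(message: &str) -> String {
    let code: String = message
        .chars()
        .take_while(|c| c.is_ascii_uppercase() || c.is_ascii_digit() || *c == '_')
        .collect();
    if code.starts_with("ERR_") { code } else { String::new() }
}
```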

## Frontend

### Project Settings
- Added `onlineAutoUseAsContext` state (persisted in `localStorage` as `papa_yu_online_auto_use_as_context`).
- Initialized from localStorage or defaults to `false`.
- Auto-saved to localStorage on change.

### Auto-Chain Flow
- When `plan.ok === false` and `plan.online_fallback_suggested` is present:
  - If `onlineAutoUseAsContext === true` and not already attempted for this goal (cycle protection via `lastGoalWithOnlineFallbackRef`):
    - Automatically calls `researchAnswer(query)`.
    - Truncates result to `8000` chars and `10` sources (frontend-side limits).
    - Immediately calls `proposeActions` again with:
      - `online_context_md`
      - `online_context_sources`
      - `online_fallback_executed: true`
      - `online_fallback_reason: error_code`
      - `online_fallback_attempted: true`
    - Displays the new plan/error without requiring "Use as context" button click.
  - If `onlineAutoUseAsContext === false` or already attempted:
    - Falls back to manual mode (shows online research block with "Use as context (once)" button).

### Cycle Protection
- `lastGoalWithOnlineFallbackRef` tracks the last goal that triggered online fallback.
- If the same goal triggers fallback again, auto-chain is skipped to prevent infinite loops.
- Maximum 1 auto-chain per user query.
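The guard implemented on the frontend via `lastGoalWithOnlineFallbackRef` boils down to the following logic, sketched here in Rust for clarity (names are illustrative, not from the codebase):

```rust
// One auto-chain per distinct goal: a repeat of the same goal is refused,
// which prevents the plan → research → plan loop from running forever.
struct AutoChainGuard {
    last_goal: Option<String>,
}

impl AutoChainGuard {
    fn new() -> Self {
        Self { last_goal: None }
    }

    /// Returns true at most once per distinct goal.
    fn try_start_auto_chain(&mut self, goal: &str) -> bool {
        if self.last_goal.as_deref() == Some(goal) {
            return false; // already attempted for this goal: fall back to manual mode
        }
        self.last_goal = Some(goal.to_string());
        true
    }
}
```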

### UI Enhancements
- **Online Research Block**:
  - When `onlineAutoUseAsContext === true`: displays "Auto-used ✓" badge.
  - Hides "Use as context (once)" button when auto-use is enabled.
  - Adds "Disable auto-use" button (red) to disable auto-use for the current project.
  - When disabled, shows the system message: "Auto-use отключён для текущего проекта." ("Auto-use is disabled for the current project.")

### API Updates
- **`proposeActions` in `tauri.ts`**: Added `onlineFallbackReason?: string | null` parameter.

## Tests

- **`online_context_auto_test.rs`**: Added unit tests for:
  - `test_is_online_auto_use_disabled_by_default`
  - `test_is_online_auto_use_enabled_when_set`
  - `test_extract_error_code_prefix_timeout`
  - `test_extract_error_code_prefix_schema`
  - `test_extract_error_code_prefix_empty_when_no_prefix`

All tests pass.

## Documentation

### README.md
- Added "Auto-use (X4)" subsection under "Online Research":
  - Describes `PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1` env var (default: 0).
  - Explains cycle protection: maximum 1 auto-chain per goal.
  - Documents UI behavior: "Auto-used ✓" badge and "Disable auto-use" button.

## Behavior Summary

**Without auto-use (default):**
1. `proposeActions` → error + `online_fallback_suggested`
2. UI calls `researchAnswer`
3. UI displays online research block with "Use as context (once)" button
4. User clicks button → sets `onlineContextPending` → next `proposeActions` includes context

**With auto-use enabled (`PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1`):**
1. `proposeActions` → error + `online_fallback_suggested`
2. UI calls `researchAnswer` automatically
3. UI displays online research block with "Auto-used ✓" badge
4. UI immediately calls `proposeActions` again with online context → displays new plan
5. If still fails → no retry (cycle protection)

## Build Status

- Backend: `cargo build --lib` (2 warnings about unused code reserved for future features)
- Frontend: `npm run build`
- Tests: `cargo test online_context_auto_test --lib` (5 passed)

Co-authored-by: Cursor <cursoragent@cursor.com>
Author: Yuriy, 2026-01-31 14:39:40 +03:00
Commit: 764003fc09 (parent: a88c34aa15)
45 changed files with 4363 additions and 118 deletions

File: `.github/workflows/protocol-check.yml`

@@ -1,4 +1,4 @@
-name: Protocol v1 check
+name: Protocol check (v1 + v2)
 on:
   push:
@@ -24,5 +24,5 @@ jobs:
           target
         key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
-      - name: golden_traces_v1_validate
-        run: cd src-tauri && cargo test golden_traces_v1_validate --no-fail-fast
+      - name: golden_traces (v1 + v2)
+        run: cd src-tauri && cargo test golden_traces --no-fail-fast


@@ -51,6 +51,11 @@
 - **CI:** `.github/workflows/protocol-check.yml`: golden_traces_v1_validate on push/PR.
 - **Golden traces policy:** in docs/golden_traces/README.md: when/how to update on schema_hash changes.
 - **Protocol v2 schema (plumbing):** `llm_response_schema_v2.json`: object-only, PATCH_FILE, base_sha256. `PAPAYU_PROTOCOL_VERSION=1|2` (default 1). schema_version and schema_hash are dynamic in the trace.
+- **V2 system prompt:** `FIX_PLAN_SYSTEM_PROMPT_V2` when protocol=2 and fix-plan/fixit.
+- **V2 context:** FILE blocks with sha256 (`FILE[path] (sha256=...):`) to supply base_sha256 for PATCH_FILE.
+- **PATCH_FILE engine:** diffy, sha256_hex, looks_like_unified_diff, apply_unified_diff. ActionKind::PatchFile, apply_patch_file_tx, preview. ERR_PATCH_NOT_UNIFIED, ERR_BASE_MISMATCH, ERR_PATCH_APPLY_FAILED.
+- **Commit 5:** the v2 prompt forbids UPDATE_FILE for existing files. ERR_V2_UPDATE_EXISTING_FORBIDDEN (plan + apply). bytes_before/bytes_after in DiffItem. ERR_NON_UTF8_FILE docs.
+- **Golden traces v2:** docs/golden_traces/v2/ (5 fixtures), golden_traces_v2_validate. CI: v1 + v2.
 ### Changed

File: `Makefile`

@@ -18,7 +18,7 @@ golden-latest:
 	cd src-tauri && cargo run --bin trace_to_golden -- "../$$LATEST"
 test-protocol:
-	cd src-tauri && cargo test golden_traces_v1_validate
+	cd src-tauri && cargo test golden_traces
 test-all:
 	cd src-tauri && cargo test

File: `README.md`

@@ -93,6 +93,29 @@ npm run tauri dev
 If `PAPAYU_LLM_API_URL` is unset or empty, the built-in heuristic is used (README, .gitignore, LICENSE, .env.example by rules).
+### Online Research (optional)
+Command `research_answer_cmd`: search via Tavily → fetch pages (SSRF-safe) → LLM summarize with sources. Called via `researchAnswer(query)` on the frontend.
+**Env:**
+- **`PAPAYU_ONLINE_RESEARCH=1`**: enable the mode (disabled by default)
+- **`PAPAYU_TAVILY_API_KEY`**: Tavily API key (tavily.com)
+- **`PAPAYU_ONLINE_MODEL`**: model for summarize (defaults to PAPAYU_LLM_MODEL)
+- **`PAPAYU_ONLINE_MAX_SOURCES`**: max search results (default 5)
+- **`PAPAYU_ONLINE_MAX_PAGES`**: max pages to fetch (default 4)
+- **`PAPAYU_ONLINE_PAGE_MAX_BYTES`**: page size limit (default 200000)
+- **`PAPAYU_ONLINE_TIMEOUT_SEC`**: fetch timeout (default 20)
+**Use as context:** after online research, the "Use as context (once)" button adds the answer to the next PLAN/APPLY. Limits:
+- **`PAPAYU_ONLINE_CONTEXT_MAX_CHARS`**: max characters of the online summary (default 8000)
+- **`PAPAYU_ONLINE_CONTEXT_MAX_SOURCES`**: max sources (default 10)
+- The online summary is trimmed first when `PAPAYU_CONTEXT_MAX_TOTAL_CHARS` is exceeded.
+**Auto-use (X4):**
+- **`PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1`**: when enabled, the online research result is automatically used as context for a repeat `proposeActions` with no user interaction (default 0).
+- Cycle protection: at most 1 auto-chain per query (goal).
+- UI: with auto-use on, an "Auto-used ✓" badge is shown; the "Disable auto-use" button turns it off for the current project (persisted in localStorage).
 ### Testing
 - **Unit tests (Rust)**: tests for `detect_project_type`, `get_project_limits`, `is_protected_file`, `is_text_allowed` (see `src-tauri/src/commands/get_project_profile.rs` and `apply_actions_tx.rs`). Run: `cd src-tauri && cargo test`.


@@ -10,6 +10,8 @@
 - **schema_hash:** sha256 of `llm_response_schema.json` (in the trace)
 - When the contract changes, bump schema_version; v2 is a separate document.
+**Default protocol:** v2; Apply can fall back to v1 on specific error codes (see PROTOCOL_V2_PLAN.md).
 ---
 ## Guarantees

File: `docs/PROTOCOL_V2_PLAN.md`

@@ -4,7 +4,21 @@
 ---
-## 3.1. Main goal of v2
+## Diff v1 → v2 (schema)
+| v1 | v2 |
+|----|-----|
+| `oneOf` (root array \| object) | always an **object** |
+| `proposed_changes.actions` | only `actions` at the root |
+| `UPDATE_FILE` with `content` | `PATCH_FILE` with `patch` + `base_sha256` (by default) |
+| 5 kinds | 6 kinds (+ PATCH_FILE) |
+| `content` for CREATE/UPDATE | `content` for CREATE/UPDATE; `patch`+`base_sha256` for PATCH |
+Added: `patch`, `base_sha256` (hex 64), mutually exclusive rules (content vs patch/base).
+---
+## Main goal of v2
 Reduce the risk/cost of whole-file `UPDATE_FILE` and improve edit precision:
 - partial patches,
@@ -12,7 +26,7 @@
 ---
-## 3.2. Minimal set of changes
+## Minimal set of changes
 ### A) New action kind: `PATCH_FILE`
@@ -54,21 +68,217 @@
 ---
-## 3.3. v1/v2 compatibility
+## v1/v2 compatibility
 - `schema_version=1` → current format (UPDATE_FILE, CREATE_FILE, …).
 - `schema_version=2` → also allows `PATCH_FILE` / `REPLACE_RANGE` and extended fields.
 In code:
 - Compile both schemas: `llm_response_schema.json` (v1), `llm_response_schema_v2.json`.
-- Pick the active one via env: `PAPAYU_PROTOCOL_VERSION=1|2` (default 1).
+- Pick the active one via env: `PAPAYU_PROTOCOL_DEFAULT` or `PAPAYU_PROTOCOL_VERSION` (default 2).
 - Validation/parser: check schema v2 first (if enabled), otherwise v1.
 ---
-## 3.4. Rolling out v2 without risk
+## Rolling out v2 without risk
-1. Add the v2 schema + validators + apply engine, **not enabled by default**.
+1. Add the v2 schema + validators + apply engine.
 2. Add the "LLM prompt v2" (recommend PATCH_FILE over UPDATE_FILE).
-3. Run it on our own projects and collect golden traces v2.
+3. Golden traces v2.
-4. Once stable, make v2 the default while keeping v1 compatibility.
+4. **v2 default** with automatic fallback to v1 (implemented).
---
## v2 default + fallback (implemented)
- **PAPAYU_PROTOCOL_DEFAULT** (or PAPAYU_PROTOCOL_VERSION): default 2.
- **PAPAYU_PROTOCOL_FALLBACK_TO_V1**: default 1 (enabled). On v2 errors (ERR_PATCH_APPLY_FAILED, ERR_NON_UTF8_FILE, ERR_V2_UPDATE_EXISTING_FORBIDDEN), retry automatically with v1.
- Fallback applies to APPLY only (plan stays on the chosen protocol).
- Trace: `protocol_default`, `protocol_attempts`, `protocol_fallback_reason`.
- Log: `[trace] PROTOCOL_FALLBACK from=v2 to=v1 reason=ERR_...`
**Compatibility:** Default protocol is v2. Apply can fall back to v1 on specific error codes (ERR_PATCH_APPLY_FAILED, ERR_NON_UTF8_FILE, ERR_V2_UPDATE_EXISTING_FORBIDDEN).
### Metrics for analysis (grep over traces/logs)
- `fallback_rate = fallback_count / apply_count`
- `fallback_rate_excluding_non_utf8`: exclude ERR_NON_UTF8_FILE (a data limitation, not a v2 failure)
- Breakdown of fallback reasons:
  - ERR_PATCH_APPLY_FAILED
  - ERR_NON_UTF8_FILE
  - ERR_V2_UPDATE_EXISTING_FORBIDDEN
Trace fields: `protocol_repair_attempt` (0|1), `protocol_fallback_stage` (apply|preview|validate|schema).
Goal: understand what keeps v2 from becoming the only protocol.
### Graduation criteria (when to disable fallback / go v2-only)
Over the last 100 APPLYs:
- `fallback_rate < 1%`
- **ERR_PATCH_APPLY_FAILED** < 1%, and cured by repair more often than by fallback
- **ERR_V2_UPDATE_EXISTING_FORBIDDEN** approaches 0 (after tightening/repair)
- **ERR_NON_UTF8_FILE** does not count as a "v2 failure" (a format limitation; can be tracked separately)
- For an honest assessment of v2, use `fallback_rate_excluding_non_utf8`
Then: `PAPAYU_PROTOCOL_FALLBACK_TO_V1=0` and, if needed, v2-only.
**protocol_fallback_stage** (where the failure occurred): `apply` (now), `preview` (preview patch did not apply), `validate` (semantics), `schema` (JSON validation); to be added as coverage grows.
### Fallback: one-shot and repair-first
- **One-shot:** a single APPLY cannot loop; if the v1 fallback also fails, return Err.
- **Repair-first:** for ERR_PATCH_APPLY_FAILED and ERR_V2_UPDATE_EXISTING_FORBIDDEN, repair v2 first, then fall back. For ERR_NON_UTF8_FILE, fall back immediately.
- **Trace:** `protocol_repair_attempt` (0|1), `protocol_fallback_attempted`, `protocol_fallback_stage` (apply|preview|validate|schema).
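The one-shot and repair-first rules can be condensed into a small dispatcher. A hypothetical sketch; the real routing lives in the apply path and may be structured differently:

```rust
// Condenses the one-shot + repair-first recovery rules. Illustrative only.
#[derive(Debug, PartialEq)]
enum Recovery {
    RepairV2,   // retry v2 with a repair prompt
    FallbackV1, // one-shot retry on protocol v1
    Fail,       // give up, surface the error
}

fn route_v2_apply_error(code: &str, repair_attempted: bool, fallback_attempted: bool) -> Recovery {
    if fallback_attempted {
        return Recovery::Fail; // one-shot: never loop v2 → v1 → v2
    }
    match code {
        // Repair-first errors: try a v2 repair before falling back.
        "ERR_PATCH_APPLY_FAILED" | "ERR_V2_UPDATE_EXISTING_FORBIDDEN" => {
            if repair_attempted { Recovery::FallbackV1 } else { Recovery::RepairV2 }
        }
        // Data limitation, not a v2 failure: fall back immediately.
        "ERR_NON_UTF8_FILE" => Recovery::FallbackV1,
        _ => Recovery::Fail,
    }
}
```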
### Weekly report (grep/jq)
Example pipeline for analyzing traces (trace JSON, one file each, single line):
```bash
# APPLY count
grep -l '"event":"LLM_PLAN_OK"' traces/*.json 2>/dev/null | wc -l
# fallback_count (protocol_fallback_attempted)
grep '"protocol_fallback_attempted":true' traces/*.json 2>/dev/null | wc -l
# breakdown by reason
grep -oh '"protocol_fallback_reason":"[^"]*"' traces/*.json 2>/dev/null | sort | uniq -c
# repair_success (protocol_repair_attempt=0 and no fallback in the next trace): needs trace linking
jq -s '[.[] | select(.event=="LLM_PLAN_OK" and .protocol_repair_attempt==0)] | length' traces/*.json 2>/dev/null
# top paths by repair_injected_sha256
grep -oh '"repair_injected_paths":\[[^]]*\]' traces/*.json 2>/dev/null | sort | uniq -c | sort -rn | head -20
```
**System prompt v2** (`FIX_PLAN_SYSTEM_PROMPT_V2`): strict PATCH_FILE rules, base_sha256, object-only, NO_CHANGES. Enabled when `PAPAYU_PROTOCOL_VERSION=2` in fix-plan/fixit mode.
**v2 FILE-block format:**
```
FILE[path/to/file.py] (sha256=7f3f2a0c9f8b1a0c9b4c0f9e3d8a4b2d8c9e7f1a0b3c4d5e6f7a8b9c0d1e2f3a):
<content>
```
The sha256 is computed over the full file contents and is **never trimmed** by context-diet. The model copies it into `base_sha256` for PATCH_FILE.
### Prompt rules (v2 tuning)
- A patch must be **minimal**: change only the needed lines, do not reformat the whole file.
- Each `@@` hunk must carry 1–3 lines of context before/after the change.
- No mass reformatting or EOL churn.
- If the file is not UTF-8, or is too large/generated, return a PLAN (actions=[]) and request an alternative.
**Auto-escalation on ERR_PATCH_APPLY_FAILED** (optional): on the repair retry, add "Increase hunk context to 3 lines; do not touch adjacent blocks."
---
## PATCH_FILE engine (implemented)
- **`patch` module:** sha256_hex, is_valid_sha256_hex, looks_like_unified_diff, apply_unified_diff_to_text (diffy)
- **tx::apply_patch_file_impl:** check base_sha256 → apply diff → EOL normalization → write
- **Preview:** preview_patch_file checks base_sha256 and applicability, returns the patch in DiffItem
- **Error codes:** ERR_PATCH_NOT_UNIFIED, ERR_BASE_MISMATCH, ERR_PATCH_APPLY_FAILED, ERR_BASE_SHA256_INVALID, ERR_NON_UTF8_FILE
- **Repair hints:** REPAIR_ERR_* for the repair flow / UI
---
## ERR_NON_UTF8_FILE and ERR_V2_UPDATE_EXISTING_FORBIDDEN
**ERR_NON_UTF8_FILE:** PATCH_FILE works only on UTF-8 text. For binary/non-UTF-8 files, only CREATE_FILE (if explicitly needed); otherwise refuse/PLAN. UI message: "File is not UTF-8. PATCH_FILE is unavailable. Switch to PLAN and choose another approach."
**ERR_V2_UPDATE_EXISTING_FORBIDDEN:** In v2, UPDATE_FILE is forbidden for existing files. Semantic gate: UPDATE_FILE on an existing file → error. Repair hint: "Generate PATCH_FILE instead of UPDATE_FILE".
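The base-hash gate that runs before any patch is applied can be sketched as follows. The real code hashes the file with the sha2 crate; here both hex digests are taken as inputs so the gate itself stays dependency-free (a sketch, not the actual implementation):

```rust
// Sketch of the base_sha256 gate: validate the hash from the action,
// then compare it against the hash of the file on disk.
fn check_base_sha256(expected_hex: &str, actual_hex: &str) -> Result<(), &'static str> {
    let valid = expected_hex.len() == 64 && expected_hex.chars().all(|c| c.is_ascii_hexdigit());
    if !valid {
        return Err("ERR_BASE_SHA256_INVALID"); // malformed hash in the action
    }
    if !expected_hex.eq_ignore_ascii_case(actual_hex) {
        return Err("ERR_BASE_MISMATCH"); // file changed between PLAN and APPLY
    }
    Ok(())
}
```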
---
## Recommendations for v2
- In v2, existing files are modified **by default** via `PATCH_FILE`.
- `base_sha256` is mandatory for `PATCH_FILE` and is verified by the app.
- On `ERR_BASE_MISMATCH`, a new PLAN is required (the file has changed).
- In APPLY, "no changes" is expressed via `NO_CHANGES:` and `actions: []`.
---
## Example v2 responses
### PLAN (v2): plan with no changes
```json
{
"actions": [],
"summary": "Диагноз: падает из-за неверной обработки None.\nПлан:\n1) Прочитать src/parser.py вокруг функции parse().\n2) Добавить проверку на None и поправить тест.\nПроверка: pytest -q",
"context_requests": [
{ "type": "read_file", "path": "src/parser.py", "start_line": 1, "end_line": 260 },
{ "type": "read_file", "path": "tests/test_parser.py", "start_line": 1, "end_line": 200 }
],
"memory_patch": {}
}
```
### APPLY (v2): PATCH_FILE on an existing file
`base_sha256` must match the hash of the current file.
```json
{
"actions": [
{
"kind": "PATCH_FILE",
"path": "src/parser.py",
"base_sha256": "7f3f2a0c9f8b1a0c9b4c0f9e3d8a4b2d8c9e7f1a0b3c4d5e6f7a8b9c0d1e2f3a",
"patch": "--- a/src/parser.py\n+++ b/src/parser.py\n@@ -41,6 +41,10 @@ def parse(value):\n- return value.strip()\n+ if value is None:\n+ return \"\"\n+ return value.strip()\n"
},
{
"kind": "PATCH_FILE",
"path": "tests/test_parser.py",
"base_sha256": "0a1b2c3d4e5f60718293a4b5c6d7e8f90123456789abcdef0123456789abcdef0",
"patch": "--- a/tests/test_parser.py\n+++ b/tests/test_parser.py\n@@ -10,7 +10,7 @@ def test_parse_none():\n- assert parse(None) is None\n+ assert parse(None) == \"\"\n"
}
],
"summary": "Исправлено: parse(None) теперь возвращает пустую строку. Обновлён тест.\nПроверка: pytest -q",
"context_requests": [],
"memory_patch": {}
}
```
### APPLY (v2): creating files (as in v1)
```json
{
"actions": [
{ "kind": "CREATE_DIR", "path": "src" },
{
"kind": "CREATE_FILE",
"path": "README.md",
"content": "# My Project\n\nRun: `make run`\n"
}
],
"summary": "Созданы папка src и README.md.",
"context_requests": [],
"memory_patch": {}
}
```
### APPLY (v2): NO_CHANGES
```json
{
"actions": [],
"summary": "NO_CHANGES: Код уже соответствует требованиям, правки не нужны.\nПроверка: pytest -q",
"context_requests": [],
"memory_patch": {}
}
```
---
## v2 engine errors
| Code | When | Action |
|-----|-------|----------|
| `ERR_BASE_MISMATCH` | File changed between PLAN and APPLY; sha256 did not match | Return to PLAN, re-read the file, refresh base_sha256 |
| `ERR_PATCH_APPLY_FAILED` | Hunks did not apply (context mismatch) | Return to PLAN, request more precise context, regenerate the patch |
| `ERR_PATCH_NOT_UNIFIED` | LLM returned something other than a unified diff | Repair retry demanding a unified diff |

File: `docs/PROTOCOL_V3_PLAN.md` (new file, 59 lines)
# Protocol v3 Plan
A roadmap for the protocol, not yet implemented. v2 solves "whole-file rewrite" via PATCH_FILE, but patches can still be brittle.
---
## Option v3-A (recommended): EDIT_FILE with operations
New action:
```json
{
  "kind": "EDIT_FILE",
  "path": "src/foo.py",
  "base_sha256": "...",
  "edits": [
    {
      "op": "replace",
      "anchor": "def parse(",
      "before": "return value.strip()",
      "after": "if value is None:\n return \"\"\nreturn value.strip()"
    }
  ]
}
```
**Pros:**
- More robust to line drift (anchored by content, not line numbers)
- Easier to validate "what exactly changed"
- Lower risk of ERR_PATCH_APPLY_FAILED
**Cons:**
- Requires a dedicated anchor-based editor
- The anchor must be unique within the file
**MVP for v3:**
- Keep PATCH_FILE as a fallback
- Add EDIT_FILE for text files only
- Engine: "find anchor → verify before → replace with after"
- base_sha256 stays mandatory
---
## Option v3-B: AST-level edits (language-specific)
For Python/TS, edits could operate on the AST (insert/delete/replace nodes). Pros: maximum precision. Cons: significantly more work, harder to maintain, requires language awareness.
---
## v1/v2/v3 compatibility
- v1: UPDATE_FILE, CREATE_FILE, …
- v2: + PATCH_FILE, base_sha256
- v3: + EDIT_FILE (anchor operations), PATCH_FILE as a fallback
The active protocol is chosen via env. v3 is compatible with v2 (EDIT_FILE is an extension).
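The v3-A MVP ("find anchor → verify before → replace with after") can be sketched as a single function. Names and error codes here are illustrative assumptions, not part of the plan:

```rust
// Sketch of the v3-A anchor editor: find the anchor, require it to be
// unique in the file, locate `before` after the anchor, replace it with
// `after`. Error codes are hypothetical.
fn apply_anchor_edit(
    text: &str,
    anchor: &str,
    before: &str,
    after: &str,
) -> Result<String, &'static str> {
    let pos = text.find(anchor).ok_or("ERR_ANCHOR_NOT_FOUND")?;
    if text[pos + anchor.len()..].contains(anchor) {
        return Err("ERR_ANCHOR_NOT_UNIQUE"); // ambiguous edit target
    }
    let rel = text[pos..].find(before).ok_or("ERR_BEFORE_NOT_FOUND")?;
    let start = pos + rel;
    let mut out = String::with_capacity(text.len() + after.len());
    out.push_str(&text[..start]);
    out.push_str(after);
    out.push_str(&text[start + before.len()..]);
    Ok(out)
}
```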

File: `docs/golden_traces/README.md`

@@ -12,6 +12,12 @@ docs/golden_traces/
   001_fix_bug_plan.json
   002_fix_bug_apply.json
   ...
+  v2/                       # Protocol v2 fixtures (PATCH_FILE, base_sha256)
+    001_fix_bug_plan.json
+    002_fix_bug_apply_patch.json
+    003_base_mismatch_block.json
+    004_patch_apply_failed_block.json
+    005_no_changes_apply.json
 ```
 ## Fixture format (no secrets)
@@ -37,7 +43,7 @@ cargo run --bin trace_to_golden -- <path/to/trace.json> [output_path]
 ## Regression test
 ```bash
-cargo test golden_traces_v1_validate
+cargo test golden_traces_v1_validate golden_traces_v2_validate
 # or
 make test-protocol
 npm run test-protocol

File: `docs/golden_traces/v2/001_fix_bug_plan.json` (new file)
{
"protocol": {
"schema_version": 2,
"schema_hash": "49374413940cb32f3763ae62b3450647eb7b3be1ae50668cf6936f29512cef7b"
},
"request": {
"mode": "plan",
"input_chars": 12000,
"token_budget": 4096,
"strict_json": true,
"provider": "openai",
"model": "gpt-4o-mini"
},
"context": {
"context_stats": {
"context_files_count": 1,
"context_files_dropped_count": 0,
"context_total_chars": 1500,
"context_logs_chars": 0,
"context_truncated_files_count": 0
},
"cache_stats": {
"env_hits": 0,
"env_misses": 1,
"logs_hits": 0,
"logs_misses": 0,
"read_hits": 0,
"read_misses": 1,
"search_hits": 0,
"search_misses": 0,
"hit_rate": 0.0
}
},
"result": {
"validated_json": {
"actions": [],
"summary": "Диагноз: ошибка в main. План: PATCH_FILE для замены println! аргумента.",
"context_requests": [{"type": "read_file", "path": "src/main.rs"}]
},
"validation_outcome": "ok",
"error_code": null
}
}

File: `docs/golden_traces/v2/002_fix_bug_apply_patch.json` (new file)
{
"protocol": {
"schema_version": 2,
"schema_hash": "49374413940cb32f3763ae62b3450647eb7b3be1ae50668cf6936f29512cef7b"
},
"request": {
"mode": "apply",
"input_chars": 15000,
"token_budget": 4096,
"strict_json": true,
"provider": "openai",
"model": "gpt-4o-mini"
},
"context": {
"context_stats": {
"context_files_count": 2,
"context_files_dropped_count": 0,
"context_total_chars": 3600,
"context_logs_chars": 0,
"context_truncated_files_count": 0
},
"cache_stats": {
"env_hits": 0,
"env_misses": 1,
"logs_hits": 0,
"logs_misses": 0,
"read_hits": 1,
"read_misses": 0,
"search_hits": 0,
"search_misses": 0,
"hit_rate": 0.5
}
},
"result": {
"validated_json": {
"actions": [
{
"kind": "PATCH_FILE",
"path": "src/main.rs",
"patch": "--- a/src/main.rs\n+++ b/src/main.rs\n@@ -1,3 +1,3 @@\n fn main() {\n- println!(\"bug\");\n+ println!(\"fix\");\n }\n",
"base_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
},
{
"kind": "PATCH_FILE",
"path": "src/lib.rs",
"patch": "--- a/src/lib.rs\n+++ b/src/lib.rs\n@@ -1,2 +1,2 @@\n-pub fn foo() {}\n+pub fn foo() { /* fixed */ }\n",
"base_sha256": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
],
"summary": "Применены PATCH_FILE для main.rs и lib.rs."
},
"validation_outcome": "ok",
"error_code": null
}
}

File: `docs/golden_traces/v2/003_base_mismatch_block.json` (new file)
{
"protocol": {
"schema_version": 2,
"schema_hash": "49374413940cb32f3763ae62b3450647eb7b3be1ae50668cf6936f29512cef7b"
},
"request": {
"mode": "apply",
"input_chars": 10000,
"token_budget": 4096,
"strict_json": true,
"provider": "openai",
"model": "gpt-4o-mini"
},
"context": {
"context_stats": {
"context_files_count": 1,
"context_files_dropped_count": 0,
"context_total_chars": 2000,
"context_logs_chars": 0,
"context_truncated_files_count": 0
},
"cache_stats": {
"env_hits": 0,
"env_misses": 1,
"logs_hits": 0,
"logs_misses": 0,
"read_hits": 1,
"read_misses": 0,
"search_hits": 0,
"search_misses": 0,
"hit_rate": 0.5
}
},
"result": {
"validated_json": {
"actions": [
{
"kind": "PATCH_FILE",
"path": "src/main.rs",
"patch": "--- a/src/main.rs\n+++ b/src/main.rs\n@@ -1,3 +1,3 @@\n fn main() {\n- println!(\"old\");\n+ println!(\"new\");\n }\n",
"base_sha256": "0000000000000000000000000000000000000000000000000000000000000000"
}
],
"summary": "Изменил main."
},
"validation_outcome": "ok",
"error_code": "ERR_BASE_MISMATCH"
}
}

File: `docs/golden_traces/v2/004_patch_apply_failed_block.json` (new file)
{
"protocol": {
"schema_version": 2,
"schema_hash": "49374413940cb32f3763ae62b3450647eb7b3be1ae50668cf6936f29512cef7b"
},
"request": {
"mode": "apply",
"input_chars": 10000,
"token_budget": 4096,
"strict_json": true,
"provider": "openai",
"model": "gpt-4o-mini"
},
"context": {
"context_stats": {
"context_files_count": 1,
"context_files_dropped_count": 0,
"context_total_chars": 2000,
"context_logs_chars": 0,
"context_truncated_files_count": 0
},
"cache_stats": {
"env_hits": 0,
"env_misses": 1,
"logs_hits": 0,
"logs_misses": 0,
"read_hits": 1,
"read_misses": 0,
"search_hits": 0,
"search_misses": 0,
"hit_rate": 0.5
}
},
"result": {
"validated_json": {
"actions": [
{
"kind": "PATCH_FILE",
"path": "src/main.rs",
"patch": "--- a/src/main.rs\n+++ b/src/main.rs\n@@ -1,5 +1,5 @@\n fn main() {\n- println!(\"WRONG_CONTEXT_LINE\");\n+ println!(\"new\");\n }\n",
"base_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
],
"summary": "Изменил main."
},
"validation_outcome": "ok",
"error_code": "ERR_PATCH_APPLY_FAILED"
}
}

File: `docs/golden_traces/v2/005_no_changes_apply.json` (new file)
{
"protocol": {
"schema_version": 2,
"schema_hash": "49374413940cb32f3763ae62b3450647eb7b3be1ae50668cf6936f29512cef7b"
},
"request": {
"mode": "apply",
"input_chars": 5000,
"token_budget": 4096,
"strict_json": true,
"provider": "openai",
"model": "gpt-4o-mini"
},
"context": {
"context_stats": {
"context_files_count": 1,
"context_files_dropped_count": 0,
"context_total_chars": 1000,
"context_logs_chars": 0,
"context_truncated_files_count": 0
},
"cache_stats": {
"env_hits": 0,
"env_misses": 1,
"logs_hits": 0,
"logs_misses": 0,
"read_hits": 0,
"read_misses": 1,
"search_hits": 0,
"search_misses": 0,
"hit_rate": 0.0
}
},
"result": {
"validated_json": {
"actions": [],
"summary": "NO_CHANGES: Проверка завершена, правок не требуется."
},
"validation_outcome": "ok",
"error_code": null
}
}

File: `package.json`

@@ -9,7 +9,7 @@
     "tauri": "tauri",
     "icons:export": "node scripts/export-icon.js",
     "golden": "cd src-tauri && cargo run --bin trace_to_golden --",
-    "test-protocol": "cd src-tauri && cargo test golden_traces_v1_validate"
+    "test-protocol": "cd src-tauri && cargo test golden_traces"
   },
   "dependencies": {
     "@tauri-apps/api": "^2.0.0",

File: `src-tauri/Cargo.toml`

@ -1,6 +1,7 @@
[package] [package]
name = "papa-yu" name = "papa-yu"
version = "2.4.4" version = "2.4.4"
default-run = "papa-yu"
edition = "2021" edition = "2021"
description = "PAPA YU — анализ и исправление проектов" description = "PAPA YU — анализ и исправление проектов"
@ -23,6 +24,10 @@ chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["v4", "serde"] } uuid = { version = "1", features = ["v4", "serde"] }
jsonschema = "0.18" jsonschema = "0.18"
sha2 = "0.10" sha2 = "0.10"
hex = "0.4"
diffy = "0.4"
url = "2"
scraper = "0.20"
[dev-dependencies] [dev-dependencies]
tempfile = "3" tempfile = "3"

New file (27 lines): JSON schema for the online research answer (`answer_md`, `confidence`, `sources`)
{
"$schema": "http://json-schema.org/draft-07/schema#",
"x_schema_version": 1,
"type": "object",
"additionalProperties": false,
"required": ["answer_md", "confidence", "sources"],
"properties": {
"answer_md": { "type": "string" },
"confidence": { "type": "number", "minimum": 0, "maximum": 1 },
"sources": {
"type": "array",
"maxItems": 10,
"items": {
"type": "object",
"additionalProperties": false,
"required": ["url", "title"],
"properties": {
"url": { "type": "string" },
"title": { "type": "string" },
"published_at": { "type": "string" },
"snippet": { "type": "string" }
}
}
},
"notes": { "type": "string" }
}
}

New file (73 lines): JSON schema for the weekly protocol report (KPIs, findings, recommendations, operator actions)
{
"$schema": "http://json-schema.org/draft-07/schema#",
"x_schema_version": 1,
"type": "object",
"additionalProperties": false,
"required": ["title", "period", "summary_md", "kpis", "findings", "recommendations", "operator_actions"],
"properties": {
"title": { "type": "string" },
"period": {
"type": "object",
"additionalProperties": false,
"required": ["from", "to"],
"properties": {
"from": { "type": "string" },
"to": { "type": "string" }
}
},
"summary_md": { "type": "string" },
"kpis": {
"type": "object",
"additionalProperties": false,
"required": ["apply_count", "fallback_count", "fallback_rate", "fallback_rate_excluding_non_utf8", "repair_success_rate", "sha_injection_rate"],
"properties": {
"apply_count": { "type": "integer", "minimum": 0 },
"fallback_count": { "type": "integer", "minimum": 0 },
"fallback_rate": { "type": "number", "minimum": 0, "maximum": 1 },
"fallback_rate_excluding_non_utf8": { "type": "number", "minimum": 0, "maximum": 1 },
"repair_success_rate": { "type": "number", "minimum": 0, "maximum": 1 },
"sha_injection_rate": { "type": "number", "minimum": 0, "maximum": 1 }
}
},
"findings": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"required": ["severity", "title", "evidence"],
"properties": {
"severity": { "type": "string", "enum": ["info", "warning", "critical"] },
"title": { "type": "string" },
"evidence": { "type": "string" }
}
}
},
"recommendations": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"required": ["priority", "title", "rationale", "expected_impact"],
"properties": {
"priority": { "type": "string", "enum": ["p0", "p1", "p2"] },
"title": { "type": "string" },
"rationale": { "type": "string" },
"expected_impact": { "type": "string" }
}
}
},
"operator_actions": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"required": ["title", "steps", "time_estimate_minutes"],
"properties": {
"title": { "type": "string" },
"steps": { "type": "array", "items": { "type": "string" } },
"time_estimate_minutes": { "type": "integer", "minimum": 1 }
}
}
}
}
}


@@ -71,6 +71,8 @@ fn build_plan(
         content: Some(
             "# Project\n\n## Описание\n\nКратко опишите проект.\n\n## Запуск\n\n- dev: ...\n- build: ...\n\n## Структура\n\n- src/\n- tests/\n".into(),
         ),
+        patch: None,
+        base_sha256: None,
     });
     plan_parts.push("README.md".into());
 }
@@ -82,6 +84,8 @@ fn build_plan(
         content: Some(
             "node_modules/\ndist/\nbuild/\n.next/\ncoverage/\n.env\n.env.*\n.DS_Store\n.target/\n".into(),
         ),
+        patch: None,
+        base_sha256: None,
     });
     plan_parts.push(".gitignore".into());
 }
@@ -91,6 +95,8 @@ fn build_plan(
         kind: ActionKind::CreateFile,
         path: "tests/README.md".to_string(),
         content: Some("# Тесты\n\nДобавьте unit- и интеграционные тесты.\n".into()),
+        patch: None,
+        base_sha256: None,
     });
     plan_parts.push("tests/README.md".into());
 }
@@ -102,6 +108,8 @@ fn build_plan(
         content: Some(
             "root = true\n\n[*]\nindent_style = space\nindent_size = 2\nend_of_line = lf\n".into(),
         ),
+        patch: None,
+        base_sha256: None,
     });
     plan_parts.push(".editorconfig".into());
 }
@@ -226,6 +234,8 @@ pub async fn agentic_run(window: Window, payload: AgenticRunRequest) -> AgenticR
     ApplyOptions {
         auto_check: false,
         user_confirmed: true,
+        protocol_version_override: None,
+        fallback_attempted: false,
     },
 )
 .await;
View File

@@ -55,6 +55,8 @@ pub fn analyze_project(paths: Vec<String>, attached_files: Option<Vec<String>>)
             kind: ActionKind::CreateFile,
             path: ".env.example".to_string(),
             content: Some("# Copy to .env and fill\n".to_string()),
+            patch: None,
+            base_sha256: None,
         });
     }
     if has_src && !has_tests {
@@ -109,6 +111,8 @@ fn build_action_groups(
             kind: ActionKind::CreateFile,
             path: "README.md".into(),
             content: Some("# Project\n\n## Overview\n\n## How to run\n\n## Tests\n\n".into()),
+            patch: None,
+            base_sha256: None,
         }],
     });
 }
@@ -129,6 +133,8 @@ fn build_action_groups(
             kind: ActionKind::CreateFile,
             path: ".gitignore".into(),
             content: Some(content.to_string()),
+            patch: None,
+            base_sha256: None,
         }],
     });
 }
@@ -143,11 +149,15 @@ fn build_action_groups(
                 kind: ActionKind::CreateDir,
                 path: "tests".into(),
                 content: None,
+                patch: None,
+                base_sha256: None,
             },
             Action {
                 kind: ActionKind::CreateFile,
                 path: "tests/README.md".into(),
                 content: Some("# Tests\n\nAdd tests here.\n".into()),
+                patch: None,
+                base_sha256: None,
             },
         ],
     });

View File

@@ -133,7 +133,7 @@ pub fn apply_actions(app: AppHandle, payload: ApplyPayload) -> ApplyResult {
     let mut sorted_actions = payload.actions.clone();
     sort_actions_for_apply(&mut sorted_actions);
     for (i, action) in sorted_actions.iter().enumerate() {
-        if let Err(e) = apply_one_action(&root, action) {
+        if let Err(e) = apply_one_action(&root, action, None) {
             let _ = rollback_tx(&app, &tx_id);
             manifest.status = "rolled_back".into();
             let _ = write_manifest(&app, &manifest);

View File

@@ -9,11 +9,29 @@ use tauri::{AppHandle, Emitter, Manager};
 use uuid::Uuid;
 use crate::commands::get_project_profile::get_project_limits;
-use crate::tx::{normalize_content_for_write, safe_join, sort_actions_for_apply};
-use crate::types::{Action, ActionKind, ApplyOptions, ApplyTxResult, CheckStageResult};
+use crate::tx::{apply_one_action, sort_actions_for_apply};
+use crate::types::{Action, ApplyOptions, ApplyTxResult, CheckStageResult};

 const PROGRESS_EVENT: &str = "analyze_progress";

+fn extract_error_code(err: &str) -> String {
+    if err.starts_with("ERR_PATCH_NOT_UNIFIED") {
+        "ERR_PATCH_NOT_UNIFIED".into()
+    } else if err.starts_with("ERR_BASE_MISMATCH") {
+        "ERR_BASE_MISMATCH".into()
+    } else if err.starts_with("ERR_PATCH_APPLY_FAILED") {
+        "ERR_PATCH_APPLY_FAILED".into()
+    } else if err.starts_with("ERR_BASE_SHA256_INVALID") {
+        "ERR_BASE_SHA256_INVALID".into()
+    } else if err.starts_with("ERR_NON_UTF8_FILE") {
+        "ERR_NON_UTF8_FILE".into()
+    } else if err.starts_with("ERR_V2_UPDATE_EXISTING_FORBIDDEN") {
+        "ERR_V2_UPDATE_EXISTING_FORBIDDEN".into()
+    } else {
+        "APPLY_FAILED_ROLLED_BACK".into()
+    }
+}
+
 fn clip(s: String, n: usize) -> String {
     if s.len() <= n {
         s
@@ -112,35 +130,6 @@ fn restore_snapshot(project_root: &Path, snap_dir: &Path) -> Result<(), String>
     Ok(())
 }

-fn apply_one_action(root: &Path, action: &Action) -> Result<(), String> {
-    let p = safe_join(root, &action.path)?;
-    match action.kind {
-        ActionKind::CreateFile | ActionKind::UpdateFile => {
-            let content = action.content.as_deref().unwrap_or("");
-            let normalized = normalize_content_for_write(content, &p);
-            if let Some(parent) = p.parent() {
-                fs::create_dir_all(parent).map_err(|e| e.to_string())?;
-            }
-            fs::write(&p, normalized.as_bytes()).map_err(|e| e.to_string())?;
-            Ok(())
-        }
-        ActionKind::DeleteFile => {
-            if p.exists() {
-                fs::remove_file(&p).map_err(|e| e.to_string())?;
-            }
-            Ok(())
-        }
-        ActionKind::CreateDir => {
-            fs::create_dir_all(&p).map_err(|e| e.to_string())
-        }
-        ActionKind::DeleteDir => {
-            if p.exists() {
-                fs::remove_dir_all(&p).map_err(|e| e.to_string())?;
-            }
-            Ok(())
-        }
-    }
-}

 fn run_cmd_allowlisted(
     cwd: &Path,
@@ -292,6 +281,7 @@ pub async fn apply_actions_tx(
             checks: vec![],
             error: Some("path not found".into()),
             error_code: Some("PATH_NOT_FOUND".into()),
+            protocol_fallback_stage: None,
         };
     }
@@ -304,6 +294,7 @@ pub async fn apply_actions_tx(
             checks: vec![],
             error: Some("confirmation required".into()),
             error_code: Some("CONFIRM_REQUIRED".into()),
+            protocol_fallback_stage: None,
         };
     }
@@ -321,6 +312,7 @@ pub async fn apply_actions_tx(
                 limits.max_actions_per_tx
             )),
             error_code: Some("TOO_MANY_ACTIONS".into()),
+            protocol_fallback_stage: None,
         };
     }
@@ -335,6 +327,7 @@ pub async fn apply_actions_tx(
                 checks: vec![],
                 error: Some(format!("protected or non-text file: {}", rel)),
                 error_code: Some("PROTECTED_PATH".into()),
+                protocol_fallback_stage: None,
             };
         }
     }
@@ -353,6 +346,7 @@ pub async fn apply_actions_tx(
                 checks: vec![],
                 error: Some(e),
                 error_code: Some("SNAPSHOT_FAILED".into()),
+                protocol_fallback_stage: None,
             };
         }
     };
@@ -361,8 +355,14 @@ pub async fn apply_actions_tx(
     let mut actions = actions;
     sort_actions_for_apply(&mut actions);
     for a in &actions {
-        if let Err(e) = apply_one_action(&root, a) {
+        let protocol_override = options.protocol_version_override;
+        if let Err(e) = apply_one_action(&root, a, protocol_override) {
             let _ = restore_snapshot(&root, &snap_dir);
+            let error_code = extract_error_code(&e);
+            let fallback_stage = crate::protocol::V2_FALLBACK_ERROR_CODES
+                .iter()
+                .any(|c| error_code == *c)
+                .then(|| "apply".to_string());
             eprintln!("[APPLY_ROLLBACK] tx_id={} path={} reason={}", tx_id, path, e);
             return ApplyTxResult {
                 ok: false,
@@ -371,7 +371,8 @@ pub async fn apply_actions_tx(
                 rolled_back: true,
                 checks: vec![],
                 error: Some(e),
-                error_code: Some("APPLY_FAILED_ROLLED_BACK".into()),
+                error_code: Some(error_code),
+                protocol_fallback_stage: fallback_stage,
             };
         }
     }
@@ -403,6 +404,7 @@ pub async fn apply_actions_tx(
                 checks,
                 error: Some("autoCheck failed — rolled back".into()),
                 error_code: Some("AUTO_CHECK_FAILED_ROLLED_BACK".into()),
+                protocol_fallback_stage: None,
             };
         }
     }
@@ -425,6 +427,7 @@ pub async fn apply_actions_tx(
         checks,
         error: None,
         error_code: None,
+        protocol_fallback_stage: None,
     }
 }
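Based on the fields visible in the diff above, a rolled-back apply that triggers the v1 fallback signal would presumably serialize along these lines. Illustrative only: the error message is invented, fields not shown in this commit (such as a transaction id) are omitted, and the example assumes ERR_BASE_MISMATCH is listed in `V2_FALLBACK_ERROR_CODES`:

```json
{
  "ok": false,
  "rolled_back": true,
  "checks": [],
  "error": "ERR_BASE_MISMATCH: src/main.rs changed on disk",
  "error_code": "ERR_BASE_MISMATCH",
  "protocol_fallback_stage": "apply"
}
```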

View File

@@ -86,6 +86,8 @@ pub async fn generate_actions_from_report(
                 content: Some(
                     "# Project\n\n## Описание\n\nКратко опишите проект.\n\n## Запуск\n\n- dev: ...\n- build: ...\n\n## Структура\n\n- src/\n- tests/\n".into(),
                 ),
+                patch: None,
+                base_sha256: None,
             });
         }
     }
@@ -102,6 +104,8 @@ pub async fn generate_actions_from_report(
                 content: Some(
                     "node_modules/\ndist/\nbuild/\n.next/\ncoverage/\n.env\n.env.*\n.DS_Store\n.target/\n".into(),
                 ),
+                patch: None,
+                base_sha256: None,
             });
         }
     }
@@ -116,6 +120,8 @@ pub async fn generate_actions_from_report(
                 kind: ActionKind::CreateFile,
                 path: rel_path,
                 content: Some("MIT License\n\nCopyright (c) <year> <copyright holders>\n".into()),
+                patch: None,
+                base_sha256: None,
             });
         }
     }
@@ -128,6 +134,8 @@ pub async fn generate_actions_from_report(
                 kind: ActionKind::CreateDir,
                 path: dir_path,
                 content: None,
+                patch: None,
+                base_sha256: None,
             });
         }
         let keep_path = rel("tests/.gitkeep");
@@ -136,6 +144,8 @@ pub async fn generate_actions_from_report(
                 kind: ActionKind::CreateFile,
                 path: keep_path,
                 content: Some("".into()),
+                patch: None,
+                base_sha256: None,
             });
         }
     }

View File

@@ -25,16 +25,12 @@ use uuid::Uuid;
 const SCHEMA_RAW: &str = include_str!("../../config/llm_response_schema.json");
 const SCHEMA_V2_RAW: &str = include_str!("../../config/llm_response_schema_v2.json");

-fn protocol_version() -> u32 {
-    std::env::var("PAPAYU_PROTOCOL_VERSION")
-        .ok()
-        .and_then(|s| s.trim().parse().ok())
-        .filter(|v| *v == 1 || *v == 2)
-        .unwrap_or(1)
+fn protocol_version(override_version: Option<u32>) -> u32 {
+    crate::protocol::protocol_version(override_version)
 }

 pub(crate) fn schema_hash() -> String {
-    schema_hash_for_version(protocol_version())
+    schema_hash_for_version(protocol_version(None))
 }

 pub(crate) fn schema_hash_for_version(version: u32) -> String {
@@ -49,7 +45,7 @@ pub(crate) fn schema_hash_for_version(version: u32) -> String {
 }

 fn current_schema_version() -> u32 {
-    protocol_version()
+    protocol_version(None)
 }

 #[derive(serde::Serialize)]
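The version-selection logic being moved into `crate::protocol` here can be sketched as a pure function. This is an assumption-laden stand-in, not the real `crate::protocol::protocol_version` (which this diff does not show): it mirrors the removed env parsing (only 1 and 2 accepted, default 1) and assumes an explicit override wins over the environment variable.

```rust
// Sketch of protocol version selection (assumption: override takes
// precedence over the PAPAYU_PROTOCOL_VERSION value; only 1 and 2 are valid).
fn resolve_protocol_version(env_raw: Option<&str>, override_version: Option<u32>) -> u32 {
    override_version
        .filter(|v| *v == 1 || *v == 2)
        .or_else(|| {
            env_raw
                .and_then(|s| s.trim().parse::<u32>().ok())
                .filter(|v| *v == 1 || *v == 2)
        })
        .unwrap_or(1)
}

fn main() {
    assert_eq!(resolve_protocol_version(Some("2"), None), 2);
    assert_eq!(resolve_protocol_version(Some("3"), None), 1); // invalid value falls back to v1
    assert_eq!(resolve_protocol_version(None, Some(2)), 2);
    println!("ok");
}
```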
@@ -270,16 +266,72 @@ pub const FIX_PLAN_SYSTEM_PROMPT: &str = r#"Ты — инженерный асс
 - context_requests: [{ type: "read_file"|"search"|"logs"|"env", path?, start_line?, end_line?, query?, glob?, source?, last_n? }]
 - memory_patch: object (только ключи из whitelist: user.*, project.*)"#;

-/// Возвращает system prompt по режиму (PAPAYU_LLM_MODE: chat | fixit | fix-plan).
+/// System prompt v2: Protocol v2 (PATCH_FILE, base_sha256, object-only).
+pub const FIX_PLAN_SYSTEM_PROMPT_V2: &str = r#"Ты — инженерный ассистент внутри программы, работающей по Protocol v2.
+
+Формат ответа:
+- Всегда возвращай ТОЛЬКО валидный JSON, строго по JSON Schema v2.
+- Корневой объект, поле "actions" обязательно.
+- Никаких комментариев, пояснений или текста вне JSON.
+
+Правила изменений файлов:
+- UPDATE_FILE запрещён для существующих файлов: используй PATCH_FILE.
+- Для изменения существующего файла ИСПОЛЬЗУЙ ТОЛЬКО PATCH_FILE.
+- PATCH_FILE ОБЯЗАН содержать:
+  - base_sha256: точный sha256 текущей версии файла (из контекста)
+  - patch: unified diff
+- Если base_sha256 не совпадает или контекста недостаточно: верни PLAN и запроси context_requests.
+
+Режимы:
+- PLAN: actions ДОЛЖЕН быть пустым массивом [], summary обязателен.
+- APPLY: если изменений нет, actions=[], summary НАЧИНАЕТСЯ с "NO_CHANGES:"; иначе actions непустой.
+
+Контекст:
+- Для каждого файла предоставляется его sha256 в формате FILE[path] (sha256=...).
+- base_sha256 бери из строки FILE[path] (sha256=...) в контексте.
+
+PATCH_FILE правила:
+- Патч должен быть минимальным: меняй только нужные строки.
+- Каждый @@ hunk должен иметь 1-3 строки контекста до/после изменения.
+- Не делай массовых форматирований и EOL-изменений.
+
+Когда нельзя PATCH_FILE:
+- Если файл не UTF-8 или слишком большой/генерируемый: верни PLAN (actions=[]) и запроси альтернативу.
+
+Запреты:
+- Не добавляй новых полей. Не изменяй защищённые пути. Не придумывай base_sha256."#;
+
+/// Возвращает system prompt по режиму и protocol_version.
 fn get_system_prompt_for_mode() -> &'static str {
     let mode = std::env::var("PAPAYU_LLM_MODE").unwrap_or_else(|_| "chat".into());
+    let use_v2 = protocol_version(None) == 2;
     match mode.trim().to_lowercase().as_str() {
-        "fixit" | "fix-it" | "fix_it" => FIXIT_SYSTEM_PROMPT,
-        "fix-plan" | "fix_plan" => FIX_PLAN_SYSTEM_PROMPT,
+        "fixit" | "fix-it" | "fix_it" => {
+            if use_v2 {
+                FIX_PLAN_SYSTEM_PROMPT_V2
+            } else {
+                FIXIT_SYSTEM_PROMPT
+            }
+        }
+        "fix-plan" | "fix_plan" => {
+            if use_v2 {
+                FIX_PLAN_SYSTEM_PROMPT_V2
+            } else {
+                FIX_PLAN_SYSTEM_PROMPT
+            }
+        }
         _ => CHAT_SYSTEM_PROMPT,
     }
 }

+/// Проверяет, нужен ли fallback на v1 для APPLY.
+/// repair_attempt: 0 = первый retry (repair-first для PATCH_APPLY/UPDATE_EXISTING), 1 = repair уже пробовали.
+pub fn is_protocol_fallback_applicable(apply_error_code: &str, repair_attempt: u32) -> bool {
+    crate::protocol::protocol_default() == 2
+        && crate::protocol::protocol_fallback_enabled()
+        && crate::protocol::should_fallback_to_v1(apply_error_code, repair_attempt)
+}
+
 /// Проверяет, включён ли LLM-планировщик (задан URL).
 pub fn is_llm_configured() -> bool {
     std::env::var("PAPAYU_LLM_API_URL")
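Putting the v2 rules together, a conforming APPLY response with a single PATCH_FILE action would look roughly like this. Illustrative only: the sha256 and diff content are invented, and the full v2 schema may require additional fields not visible in this excerpt:

```json
{
  "summary": "Rename greeting in src/main.rs",
  "actions": [
    {
      "kind": "PATCH_FILE",
      "path": "src/main.rs",
      "base_sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
      "patch": "--- a/src/main.rs\n+++ b/src/main.rs\n@@ -1,3 +1,3 @@\n fn main() {\n-    println!(\"hello\");\n+    println!(\"hi\");\n }\n"
    }
  ]
}
```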
@@ -423,9 +475,74 @@ const REPAIR_PROMPT_PLAN_ACTIONS_MUST_BE_EMPTY: &str = r#"
 Верни объект с "actions": [] и "summary" (диагноз + план шагов).
 "#;

+/// v2 repair hints для PATCH_FILE (для repair flow / UI)
+#[allow(dead_code)]
+const REPAIR_ERR_PATCH_NOT_UNIFIED: &str = "ERR_PATCH_NOT_UNIFIED: patch должен быть unified diff (---/+++ и @@ hunks)";
+#[allow(dead_code)]
+const REPAIR_ERR_BASE_MISMATCH: &str = "ERR_BASE_MISMATCH: файл изменился, верни PLAN и запроси read_file заново";
+#[allow(dead_code)]
+const REPAIR_ERR_PATCH_APPLY_FAILED: &str = "ERR_PATCH_APPLY_FAILED: патч не применяется, верни PLAN и запроси больше контекста вокруг изменения";
+#[allow(dead_code)]
+const REPAIR_ERR_V2_UPDATE_EXISTING_FORBIDDEN: &str = "ERR_V2_UPDATE_EXISTING_FORBIDDEN: сгенерируй PATCH_FILE вместо UPDATE_FILE для существующего файла";
+
+/// Шаблон для repair с подстановкой path и sha256 (ERR_BASE_SHA256_NOT_FROM_CONTEXT).
+fn repair_err_base_sha256_not_from_context(path: &str, sha256: &str) -> String {
+    format!(
+        r#"ERR_BASE_SHA256_NOT_FROM_CONTEXT:
+Для PATCH_FILE по пути "{}" base_sha256 должен быть ровно sha256 из контекста.
+Используй это значение base_sha256: {}
+
+Верни ТОЛЬКО валидный JSON по схеме v2.
+Для изменения файла используй PATCH_FILE с base_sha256={} и unified diff в поле patch.
+НЕ добавляй новых полей."#,
+        path, sha256, sha256
+    )
+}
+
+/// Строит repair prompt с конкретным sha256 из контекста (v2 + PATCH_FILE).
+/// Возвращает Some((prompt, paths)), если нашли sha для PATCH_FILE с неверным base_sha256.
+pub fn build_v2_patch_repair_prompt_with_sha(
+    last_plan_context: &str,
+    validated_json: &serde_json::Value,
+) -> Option<(String, Vec<String>)> {
+    use crate::context;
+    use crate::patch;
+    if protocol_version(None) != 2 {
+        return None;
+    }
+    let actions = validated_json
+        .get("proposed_changes")
+        .and_then(|pc| pc.get("actions"))
+        .or_else(|| validated_json.get("actions"))
+        .and_then(|a| a.as_array())?;
+    let sha_map = context::extract_file_sha256_from_context(last_plan_context);
+    for a in actions {
+        let obj = a.as_object()?;
+        let kind = obj.get("kind").and_then(|k| k.as_str()).unwrap_or("");
+        if kind.to_uppercase() != "PATCH_FILE" {
+            continue;
+        }
+        let path = obj.get("path").and_then(|p| p.as_str())?;
+        let sha_ctx = sha_map.get(path)?;
+        let base = obj.get("base_sha256").and_then(|b| b.as_str());
+        let needs_repair = match base {
+            None => true,
+            Some(b) if !patch::is_valid_sha256_hex(b) => true,
+            Some(b) if b != sha_ctx.as_str() => true,
+            _ => false,
+        };
+        if needs_repair {
+            let prompt = repair_err_base_sha256_not_from_context(path, sha_ctx);
+            return Some((prompt, vec![path.to_string()]));
+        }
+    }
+    None
+}
+
 /// Компилирует JSON Schema для локальной валидации (v1 или v2 по protocol_version).
 fn compiled_response_schema() -> Option<JSONSchema> {
-    let raw = if protocol_version() == 2 {
+    let raw = if protocol_version(None) == 2 {
         SCHEMA_V2_RAW
     } else {
         SCHEMA_RAW
@@ -445,6 +562,18 @@ fn validate_json_against_schema(value: &serde_json::Value) -> Result<(), String>
     })
 }

+/// Валидация против схемы конкретной версии (для golden traces).
+#[allow(dead_code)]
+fn compiled_schema_for_version(version: u32) -> Option<JSONSchema> {
+    let raw = if version == 2 {
+        SCHEMA_V2_RAW
+    } else {
+        SCHEMA_RAW
+    };
+    let schema: serde_json::Value = serde_json::from_str(raw).ok()?;
+    JSONSchema::options().compile(&schema).ok()
+}
+
 /// Извлекает JSON из ответа (убирает обёртку ```json ... ``` при наличии).
 fn extract_json_from_content(content: &str) -> Result<&str, String> {
     let content = content.trim();
@@ -521,7 +650,7 @@ fn validate_path(path: &str, idx: usize) -> Result<(), String> {
     Ok(())
 }

-/// Проверяет конфликты действий на один path (CREATE+UPDATE, DELETE+UPDATE и т.д.).
+/// Проверяет конфликты действий на один path (CREATE+UPDATE, PATCH+UPDATE, DELETE+UPDATE и т.д.).
 fn validate_action_conflicts(actions: &[Action]) -> Result<(), String> {
     use std::collections::HashMap;
     let mut by_path: HashMap<String, Vec<ActionKind>> = HashMap::new();
@@ -532,6 +661,7 @@ fn validate_action_conflicts(actions: &[Action]) -> Result<(), String> {
     for (path, kinds) in by_path {
         let has_create = kinds.contains(&ActionKind::CreateFile);
         let has_update = kinds.contains(&ActionKind::UpdateFile);
+        let has_patch = kinds.contains(&ActionKind::PatchFile);
         let has_delete_file = kinds.contains(&ActionKind::DeleteFile);
         let has_delete_dir = kinds.contains(&ActionKind::DeleteDir);
         if has_create && has_update {
@@ -540,6 +670,19 @@ fn validate_action_conflicts(actions: &[Action]) -> Result<(), String> {
                 path
             ));
         }
+        // PATCH_FILE конфликтует с CREATE/UPDATE/DELETE на тот же path
+        if has_patch && (has_create || has_update) {
+            return Err(format!(
+                "ERR_ACTION_CONFLICT: path '{}' has PATCH_FILE and CREATE/UPDATE",
+                path
+            ));
+        }
+        if has_patch && (has_delete_file || has_delete_dir) {
+            return Err(format!(
+                "ERR_ACTION_CONFLICT: path '{}' has PATCH_FILE and DELETE",
+                path
+            ));
+        }
         if (has_delete_file || has_delete_dir) && (has_create || has_update) {
             return Err(format!(
                 "ERR_ACTION_CONFLICT: path '{}' has conflicting DELETE and CREATE/UPDATE",
@ -554,15 +697,15 @@ fn validate_action_conflicts(actions: &[Action]) -> Result<(), String> {
fn extract_files_read_from_plan_context(plan_context: &str) -> std::collections::HashSet<String> { fn extract_files_read_from_plan_context(plan_context: &str) -> std::collections::HashSet<String> {
let mut paths = std::collections::HashSet::new(); let mut paths = std::collections::HashSet::new();
let mut search = plan_context; let mut search = plan_context;
// FILE[path]: — из fulfill_context_requests // FILE[path]: или FILE[path] (sha256=...): — из fulfill_context_requests
while let Some(start) = search.find("FILE[") { while let Some(start) = search.find("FILE[") {
search = &search[start + 5..]; search = &search[start + 5..];
if let Some(end) = search.find("]:") { if let Some(end) = search.find(']') {
let path = search[..end].trim().replace('\\', "/"); let path = search[..end].trim().replace('\\', "/");
if !path.is_empty() { if !path.is_empty() {
paths.insert(path); paths.insert(path);
} }
search = &search[end + 2..]; search = &search[end + 1..];
} else { } else {
break; break;
} }
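The widened marker scan above (accepting both `FILE[path]:` and the new `FILE[path] (sha256=...)` form) can be exercised standalone. This is a reimplementation for illustration, not the crate's actual function, and it collects into a `Vec` instead of a `HashSet` so the order is visible:

```rust
// Mirrors the FILE[...] scan from the diff: find "FILE[", take everything
// up to the next ']', normalize backslashes, and keep non-empty paths.
fn extract_file_paths(ctx: &str) -> Vec<String> {
    let mut paths = Vec::new();
    let mut search = ctx;
    while let Some(start) = search.find("FILE[") {
        search = &search[start + 5..];
        match search.find(']') {
            Some(end) => {
                let path = search[..end].trim().replace('\\', "/");
                if !path.is_empty() {
                    paths.push(path);
                }
                search = &search[end + 1..];
            }
            None => break,
        }
    }
    paths
}

fn main() {
    // Both marker forms are recognized because the scan stops at ']', not "]:".
    let ctx = "FILE[src/main.rs] (sha256=abc):\n...\nFILE[README.md]:\n...";
    assert_eq!(extract_file_paths(ctx), vec!["src/main.rs", "README.md"]);
    println!("ok");
}
```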
@@ -584,7 +727,34 @@ fn extract_files_read_from_plan_context(plan_context: &str) -> std::collections::HashSet<String> {
     paths
 }

-/// APPLY-режим: каждый UPDATE_FILE должен ссылаться на файл, прочитанный в plan.
+/// v2: UPDATE_FILE запрещён для существующих файлов — используй PATCH_FILE.
+fn validate_v2_update_existing_forbidden(
+    project_root: &std::path::Path,
+    actions: &[Action],
+) -> Result<(), String> {
+    if protocol_version(None) != 2 {
+        return Ok(());
+    }
+    for (i, a) in actions.iter().enumerate() {
+        if a.kind != ActionKind::UpdateFile {
+            continue;
+        }
+        let p = match crate::tx::safe_join(project_root, &a.path) {
+            Ok(p) => p,
+            Err(_) => continue,
+        };
+        if p.is_file() {
+            return Err(format!(
+                "ERR_V2_UPDATE_EXISTING_FORBIDDEN: UPDATE_FILE path '{}' существует (actions[{}]). \
+                 В v2 используй PATCH_FILE для существующих файлов. Сгенерируй PATCH_FILE.",
+                a.path, i
+            ));
+        }
+    }
+    Ok(())
+}
+
+/// APPLY-режим: UPDATE_FILE и PATCH_FILE должны ссылаться на файл, прочитанный в plan.
 fn validate_update_without_base(
     actions: &[Action],
     plan_context: Option<&str>,
@@ -592,13 +762,18 @@ fn validate_update_without_base(
     let Some(ctx) = plan_context else { return Ok(()) };
     let read_paths = extract_files_read_from_plan_context(ctx);
     for (i, a) in actions.iter().enumerate() {
-        if a.kind == ActionKind::UpdateFile {
+        if a.kind == ActionKind::UpdateFile || a.kind == ActionKind::PatchFile {
             let path = a.path.replace('\\', "/").trim().to_string();
             if !read_paths.contains(&path) {
+                let kind_str = if a.kind == ActionKind::PatchFile {
+                    "PATCH_FILE"
+                } else {
+                    "UPDATE_FILE"
+                };
                 return Err(format!(
-                    "ERR_UPDATE_WITHOUT_BASE: UPDATE_FILE path '{}' not read in plan (actions[{}]). \
+                    "ERR_UPDATE_WITHOUT_BASE: {} path '{}' not read in plan (actions[{}]). \
                      В PLAN-цикле должен быть context_requests.read_file для этого path.",
-                    path, i
+                    kind_str, path, i
                 ));
             }
         }
@@ -673,6 +848,29 @@ fn validate_actions(actions: &[Action]) -> Result<(), String> {
                 validate_content(content, i)?;
                 total_bytes += content.len();
             }
+            ActionKind::PatchFile => {
+                let patch = a.patch.as_deref().unwrap_or("");
+                let base = a.base_sha256.as_deref().unwrap_or("");
+                if patch.trim().is_empty() {
+                    return Err(format!(
+                        "actions[{}].patch required for PATCH_FILE (ERR_PATCH_REQUIRED)",
+                        i
+                    ));
+                }
+                if !crate::patch::looks_like_unified_diff(patch) {
+                    return Err(format!(
+                        "actions[{}].patch is not unified diff (ERR_PATCH_NOT_UNIFIED)",
+                        i
+                    ));
+                }
+                if !crate::patch::is_valid_sha256_hex(base) {
+                    return Err(format!(
+                        "actions[{}].base_sha256 invalid (64 hex chars) (ERR_BASE_SHA256_INVALID)",
+                        i
+                    ));
+                }
+                total_bytes += a.patch.as_ref().map(|p| p.len()).unwrap_or(0);
+            }
             _ => {}
         }
     }
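The two helpers referenced above (`looks_like_unified_diff`, `is_valid_sha256_hex`) live in `crate::patch` and are not shown in this diff. A plausible minimal version, offered only as a sketch of the validation intent rather than the real implementation, could look like this:

```rust
// Illustrative stand-ins for the crate::patch helpers (assumptions, not the
// actual code from this repository).
fn looks_like_unified_diff(patch: &str) -> bool {
    // Require ---/+++ file headers and at least one @@ hunk header.
    let has_old = patch.lines().any(|l| l.starts_with("--- "));
    let has_new = patch.lines().any(|l| l.starts_with("+++ "));
    let has_hunk = patch.lines().any(|l| l.starts_with("@@"));
    has_old && has_new && has_hunk
}

fn is_valid_sha256_hex(s: &str) -> bool {
    // A sha256 digest in hex is exactly 64 hex characters.
    s.len() == 64 && s.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    let diff = "--- a/x.txt\n+++ b/x.txt\n@@ -1 +1 @@\n-a\n+b\n";
    assert!(looks_like_unified_diff(diff));
    assert!(!looks_like_unified_diff("free-form text"));
    assert!(is_valid_sha256_hex(&"ab".repeat(32)));
    assert!(!is_valid_sha256_hex("xyz"));
    println!("ok");
}
```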
@@ -703,6 +901,7 @@ fn parse_actions_from_json(json_str: &str) -> Result<Vec<Action>, String> {
             "CREATE_FILE" => ActionKind::CreateFile,
             "CREATE_DIR" => ActionKind::CreateDir,
             "UPDATE_FILE" => ActionKind::UpdateFile,
+            "PATCH_FILE" => ActionKind::PatchFile,
             "DELETE_FILE" => ActionKind::DeleteFile,
             "DELETE_DIR" => ActionKind::DeleteDir,
             _ => ActionKind::CreateFile,
@@ -712,11 +911,16 @@ fn parse_actions_from_json(json_str: &str) -> Result<Vec<Action>, String> {
             .and_then(|p| p.as_str())
             .map(|s| s.to_string())
             .unwrap_or_else(|| format!("unknown_{}", i));
-        let content = obj
-            .get("content")
-            .and_then(|c| c.as_str())
-            .map(|s| s.to_string());
-        actions.push(Action { kind, path, content });
+        let content = obj.get("content").and_then(|c| c.as_str()).map(|s| s.to_string());
+        let patch = obj.get("patch").and_then(|p| p.as_str()).map(|s| s.to_string());
+        let base_sha256 = obj.get("base_sha256").and_then(|b| b.as_str()).map(|s| s.to_string());
+        actions.push(Action {
+            kind,
+            path,
+            content,
+            patch,
+            base_sha256,
+        });
     }
     Ok(actions)
 }
@@ -770,6 +974,7 @@ const MAX_CONTEXT_ROUNDS: u32 = 2;
 /// Автосбор контекста: env + project prefs в начало user message; при context_requests — до MAX_CONTEXT_ROUNDS раундов.
 /// output_format_override: "plan" | "apply" — для двухфазного Plan→Apply.
 /// last_plan_for_apply, last_context_for_apply: при переходе из Plan в Apply (user сказал "ok").
+/// apply_error_for_repair: (error_code, validated_json) при ретрае после ERR_BASE_MISMATCH/ERR_BASE_SHA256_INVALID.
 const DEFAULT_MAX_TOKENS: u32 = 16384;

 pub async fn plan(
@@ -784,9 +989,21 @@ pub async fn plan(
     output_format_override: Option<&str>,
     last_plan_for_apply: Option<&str>,
     last_context_for_apply: Option<&str>,
+    apply_error_for_repair: Option<(&str, &str)>,
+    force_protocol_version: Option<u32>,
+    apply_error_stage: Option<&str>,
+    apply_repair_attempt: Option<u32>,
+    online_context_md: Option<&str>,
+    online_context_sources: Option<&[String]>,
+    online_fallback_executed: Option<bool>,
+    online_fallback_reason: Option<&str>,
 ) -> Result<AgentPlan, String> {
     let trace_id = Uuid::new_v4().to_string();
+    let effective_protocol = force_protocol_version
+        .filter(|v| *v == 1 || *v == 2)
+        .unwrap_or_else(|| crate::protocol::protocol_default());
+    let _guard = crate::protocol::set_protocol_version(effective_protocol);
     let api_url = std::env::var("PAPAYU_LLM_API_URL").map_err(|_| "PAPAYU_LLM_API_URL not set")?;
     let api_url = api_url.trim();
     if api_url.is_empty() {
@ -816,12 +1033,88 @@ pub async fn plan(
project_root, project_root,
&format!("{}\n{}", user_goal, report_json), &format!("{}\n{}", user_goal, report_json),
); );
let mut user_message = format!("{}{}{}", base_context, prompt_body, auto_from_message); let rest_context = format!("{}{}{}", base_context, prompt_body, auto_from_message);
let mut online_block_result: Option<crate::online_research::OnlineBlockResult> = None;
let mut online_context_dropped = false;
let mut user_message = rest_context.clone();
if let Some(md) = online_context_md {
if !md.trim().is_empty() {
let max_chars = crate::online_research::online_context_max_chars();
let max_sources = crate::online_research::online_context_max_sources();
let rest_chars = rest_context.chars().count();
let max_total = context::context_max_total_chars();
let priority0_reserved = 4096usize;
let effective_max = crate::online_research::effective_online_max_chars(
rest_chars,
max_total,
priority0_reserved,
);
let effective_max = if effective_max > 0 {
effective_max.min(max_chars)
} else {
0
};
let sources: Vec<String> = online_context_sources
.map(|s| s.to_vec())
.unwrap_or_default();
if effective_max >= 512 {
let result = crate::online_research::build_online_context_block(
md,
&sources,
effective_max,
max_sources,
);
if !result.dropped {
user_message = format!("{}{}", result.block, rest_context);
online_block_result = Some(result);
} else {
online_context_dropped = true;
}
} else {
online_context_dropped = true;
}
}
}
let mut repair_injected_paths: Vec<String> = Vec::new();
// Переход Plan→Apply: инжектируем сохранённый план и контекст // Переход Plan→Apply: инжектируем сохранённый план и контекст
if output_format_override == Some("apply") { if output_format_override == Some("apply") {
if let Some(plan_json) = last_plan_for_apply { if let Some(plan_json) = last_plan_for_apply {
let mut apply_prompt = String::from("\n\n--- РЕЖИМ APPLY ---\nПользователь подтвердил план. Применяй изменения согласно плану ниже. Верни actions с конкретными правками файлов.\n\nПЛАН:\n"); let mut apply_prompt = String::new();
// Repair после ERR_BASE_MISMATCH/ERR_BASE_SHA256_INVALID: подставляем sha256 из контекста
if let Some((code, validated_json_str)) = apply_error_for_repair {
let is_base_error = code == "ERR_BASE_MISMATCH" || code == "ERR_BASE_SHA256_INVALID";
if is_base_error {
if let Some(ctx) = last_context_for_apply {
if let Ok(val) = serde_json::from_str::<serde_json::Value>(validated_json_str) {
if let Some((repair, paths)) = build_v2_patch_repair_prompt_with_sha(ctx, &val) {
repair_injected_paths = paths;
apply_prompt.push_str("\n\n--- REPAIR (ERR_BASE_SHA256_NOT_FROM_CONTEXT) ---\n");
apply_prompt.push_str(&repair);
apply_prompt.push_str("\n\nRaw output предыдущего ответа:\n");
apply_prompt.push_str(validated_json_str);
apply_prompt.push_str("\n\n");
}
}
}
}
// Repair-first для ERR_PATCH_APPLY_FAILED и ERR_V2_UPDATE_EXISTING_FORBIDDEN (без fallback)
if force_protocol_version != Some(1)
&& (code == "ERR_PATCH_APPLY_FAILED" || code == "ERR_V2_UPDATE_EXISTING_FORBIDDEN")
{
if code == "ERR_PATCH_APPLY_FAILED" {
apply_prompt.push_str("\n\n--- REPAIR (ERR_PATCH_APPLY_FAILED) ---\n");
apply_prompt.push_str("Увеличь контекст hunks до 3 строк, не меняй соседние блоки. Верни PATCH_FILE с исправленным patch.\n\n");
} else if code == "ERR_V2_UPDATE_EXISTING_FORBIDDEN" {
apply_prompt.push_str("\n\n--- REPAIR (ERR_V2_UPDATE_EXISTING_FORBIDDEN) ---\n");
apply_prompt.push_str("Сгенерируй PATCH_FILE вместо UPDATE_FILE для существующих файлов. Используй base_sha256 из контекста.\n\n");
}
apply_prompt.push_str("Raw output предыдущего ответа:\n");
apply_prompt.push_str(validated_json_str);
apply_prompt.push_str("\n\n");
}
}
apply_prompt.push_str("\n\n--- РЕЖИМ APPLY ---\nПользователь подтвердил план. Применяй изменения согласно плану ниже. Верни actions с конкретными правками файлов.\n\nПЛАН:\n");
apply_prompt.push_str(plan_json);
if let Some(ctx) = last_context_for_apply {
apply_prompt.push_str("\n\nСОБРАННЫЙ_КОНТЕКСТ:\n");
@@ -943,12 +1236,20 @@ pub async fn plan(
}
}
let resp = match req.send().await {
Ok(r) => r,
Err(e) => {
let timeout = e.is_timeout();
if timeout {
log_llm_event(&trace_id, "LLM_REQUEST_TIMEOUT", &[("timeout_sec", timeout_sec.to_string())]);
}
return Err(format!(
"{}: Request: {}",
if timeout { "LLM_REQUEST_TIMEOUT" } else { "LLM_REQUEST" },
e
));
}
};
let status = resp.status();
let text = resp.text().await.map_err(|e| format!("Response body: {}", e))?;
@@ -1005,7 +1306,7 @@ pub async fn plan(
Err(e) => {
let mut trace_val = serde_json::json!({ "trace_id": trace_id, "raw_content": content, "error": e, "event": "VALIDATION_FAILED" });
write_trace(path, &trace_id, &mut trace_val);
return Err(format!("ERR_JSON_EXTRACT: {}", e));
}
};
@@ -1020,7 +1321,7 @@ pub async fn plan(
repair_done = true;
continue;
}
Err(e) => return Err(format!("ERR_JSON_PARSE: JSON parse: {}", e)),
};
// Local schema validation (best-effort when strict is off; mandatory when strict is on)
@@ -1036,7 +1337,7 @@ pub async fn plan(
}
let mut trace_val = serde_json::json!({ "trace_id": trace_id, "raw_content": content, "validated_json": json_str, "error": e, "event": "VALIDATION_FAILED" });
write_trace(path, &trace_id, &mut trace_val);
return Err(format!("ERR_SCHEMA_VALIDATION: {}", e));
}
let parsed = parse_plan_response(json_str)?;
@@ -1102,7 +1403,7 @@ pub async fn plan(
break (parsed.actions, parsed.summary_override, json_str.to_string(), user_message.clone());
};
// Strict validation: path, content, conflicts, UPDATE_WITHOUT_BASE, v2 UPDATE_EXISTING_FORBIDDEN
if let Err(e) = validate_actions(&last_actions) {
log_llm_event(&trace_id, "VALIDATION_FAILED", &[("code", "ERR_ACTIONS".to_string()), ("reason", e.clone())]);
let mut trace_val = serde_json::json!({ "trace_id": trace_id, "validated_json": last_plan_json, "error": e, "event": "VALIDATION_FAILED" });
@@ -1119,6 +1420,12 @@ pub async fn plan(
write_trace(path, &trace_id, &mut trace_val);
return Err(e);
}
if let Err(e) = validate_v2_update_existing_forbidden(project_root, &last_actions) {
log_llm_event(&trace_id, "VALIDATION_FAILED", &[("code", "ERR_V2_UPDATE_EXISTING_FORBIDDEN".to_string()), ("reason", e.clone())]);
let mut trace_val = serde_json::json!({ "trace_id": trace_id, "validated_json": last_plan_json, "error": e, "event": "VALIDATION_FAILED" });
write_trace(path, &trace_id, &mut trace_val);
return Err(e);
}
}
let mode_for_plan_json = output_format_override
@@ -1136,7 +1443,38 @@ pub async fn plan(
"provider": provider,
"actions_count": last_actions.len(),
"validated_json": last_plan_json,
"protocol_default": crate::protocol::protocol_default(),
});
if let Some((_, _)) = apply_error_for_repair {
trace_val["protocol_repair_attempt"] = serde_json::json!(apply_repair_attempt.unwrap_or(0));
}
if force_protocol_version == Some(1) {
trace_val["protocol_attempts"] = serde_json::json!(["v2", "v1"]);
trace_val["protocol_fallback_reason"] = serde_json::json!(apply_error_for_repair.as_ref().map(|(c, _)| *c).unwrap_or("unknown"));
trace_val["protocol_fallback_attempted"] = serde_json::json!(true);
trace_val["protocol_fallback_stage"] = serde_json::json!(apply_error_stage.unwrap_or("apply"));
}
if !repair_injected_paths.is_empty() {
trace_val["repair_injected_sha256"] = serde_json::json!(true);
trace_val["repair_injected_paths"] = serde_json::json!(repair_injected_paths);
}
if online_fallback_executed == Some(true) {
trace_val["online_fallback_executed"] = serde_json::json!(true);
if let Some(reason) = online_fallback_reason {
trace_val["online_fallback_reason"] = serde_json::json!(reason);
}
}
if let Some(ref r) = online_block_result {
trace_val["online_context_injected"] = serde_json::json!(true);
trace_val["online_context_chars"] = serde_json::json!(r.chars_used);
trace_val["online_context_sources_count"] = serde_json::json!(r.sources_count);
if r.was_truncated {
trace_val["online_context_truncated"] = serde_json::json!(true);
}
}
if online_context_dropped {
trace_val["online_context_dropped"] = serde_json::json!(true);
}
if let Some(ref cs) = last_context_stats {
trace_val["context_stats"] = serde_json::json!({
"context_files_count": cs.context_files_count,
@@ -1169,19 +1507,39 @@ pub async fn plan(
error_code: None,
plan_json,
plan_context,
protocol_version_used: Some(effective_protocol),
online_fallback_suggested: None,
online_context_used: Some(online_block_result.is_some()),
})
}
#[cfg(test)]
mod tests {
use super::{
build_v2_patch_repair_prompt_with_sha, compiled_schema_for_version,
extract_files_read_from_plan_context, is_protocol_fallback_applicable, parse_actions_from_json,
schema_hash, schema_hash_for_version,
validate_actions, validate_update_without_base, validate_v2_update_existing_forbidden,
FIX_PLAN_SYSTEM_PROMPT, LLM_PLAN_SCHEMA_VERSION,
};
use crate::types::{Action, ActionKind};
use std::fs;
use std::path::Path;
#[test]
fn test_protocol_fallback_applicable() {
std::env::set_var("PAPAYU_PROTOCOL_DEFAULT", "2");
std::env::set_var("PAPAYU_PROTOCOL_FALLBACK_TO_V1", "1");
assert!(!is_protocol_fallback_applicable("ERR_PATCH_APPLY_FAILED", 0)); // repair-first
assert!(is_protocol_fallback_applicable("ERR_PATCH_APPLY_FAILED", 1));
assert!(is_protocol_fallback_applicable("ERR_NON_UTF8_FILE", 0)); // immediate fallback
assert!(!is_protocol_fallback_applicable("ERR_V2_UPDATE_EXISTING_FORBIDDEN", 0)); // repair-first
assert!(is_protocol_fallback_applicable("ERR_V2_UPDATE_EXISTING_FORBIDDEN", 1));
assert!(!is_protocol_fallback_applicable("ERR_BASE_MISMATCH", 0)); // sha repair, not fallback
std::env::remove_var("PAPAYU_PROTOCOL_DEFAULT");
std::env::remove_var("PAPAYU_PROTOCOL_FALLBACK_TO_V1");
}
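The test above encodes the fallback policy: repair-first errors go to v1 only after one failed repair attempt, non-UTF-8 files fall back immediately, and base-sha errors are repaired rather than downgraded. A minimal sketch of that decision table (the real `is_protocol_fallback_applicable` also consults the `PAPAYU_PROTOCOL_DEFAULT` and `PAPAYU_PROTOCOL_FALLBACK_TO_V1` env flags, which this sketch omits):

```rust
// Hedged sketch of the fallback decision encoded by the test above;
// env-flag gating is omitted.
fn fallback_applicable(code: &str, repair_attempt: u32) -> bool {
    match code {
        // Repair-first: fall back to v1 only after a failed repair.
        "ERR_PATCH_APPLY_FAILED" | "ERR_V2_UPDATE_EXISTING_FORBIDDEN" => repair_attempt >= 1,
        // Non-UTF-8 files cannot be patched at all: immediate fallback.
        "ERR_NON_UTF8_FILE" => true,
        // Base-sha errors get a sha repair prompt instead of a fallback.
        _ => false,
    }
}

fn main() {
    assert!(!fallback_applicable("ERR_PATCH_APPLY_FAILED", 0));
    assert!(fallback_applicable("ERR_PATCH_APPLY_FAILED", 1));
    assert!(fallback_applicable("ERR_NON_UTF8_FILE", 0));
    assert!(!fallback_applicable("ERR_BASE_MISMATCH", 0));
    println!("ok");
}
```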
#[test]
fn test_schema_version_is_one() {
assert_eq!(LLM_PLAN_SCHEMA_VERSION, 1);
@@ -1218,6 +1576,13 @@ mod tests {
assert_eq!(h.len(), 64);
}
/// Run with: cargo test golden_traces_v2_schema_hash -- --nocapture
#[test]
#[ignore]
fn golden_traces_v2_schema_hash() {
eprintln!("v2 schema_hash: {}", schema_hash_for_version(2));
}
#[test]
fn test_validate_actions_empty() {
assert!(validate_actions(&[]).is_ok());
@@ -1229,6 +1594,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "README.md".to_string(),
content: Some("# Project".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_ok());
}
@@ -1239,6 +1606,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "../etc/passwd".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1249,6 +1618,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "/etc/passwd".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1259,6 +1630,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "a/..".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1269,6 +1642,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "C:/foo/bar".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1279,6 +1654,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "//server/share/file".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1289,6 +1666,8 @@ mod tests {
kind: ActionKind::CreateDir,
path: ".".to_string(),
content: None,
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1299,6 +1678,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "a/./b".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1309,6 +1690,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "./src/main.rs".to_string(),
content: Some("fn main() {}".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_ok());
}
@@ -1320,11 +1703,15 @@ mod tests {
kind: ActionKind::CreateFile,
path: "foo.txt".to_string(),
content: Some("a".to_string()),
patch: None,
base_sha256: None,
},
Action {
kind: ActionKind::UpdateFile,
path: "foo.txt".to_string(),
content: Some("b".to_string()),
patch: None,
base_sha256: None,
},
];
assert!(validate_actions(&actions).is_err());
@@ -1337,11 +1724,15 @@ mod tests {
kind: ActionKind::DeleteFile,
path: "foo.txt".to_string(),
content: None,
patch: None,
base_sha256: None,
},
Action {
kind: ActionKind::UpdateFile,
path: "foo.txt".to_string(),
content: Some("b".to_string()),
patch: None,
base_sha256: None,
},
];
assert!(validate_actions(&actions).is_err());
@@ -1355,6 +1746,13 @@ mod tests {
assert!(paths.contains("README.md"));
}
#[test]
fn test_extract_files_from_plan_context_v2_sha256() {
let ctx = "FILE[src/parser.py] (sha256=7f3f2a0c9f8b1a0c9b4c0f9e3d8a4b2d8c9e7f1a0b3c4d5e6f7a8b9c0d1e2f3a):\n1|def parse";
let paths = extract_files_read_from_plan_context(ctx);
assert!(paths.contains("src/parser.py"));
}
#[test]
fn test_validate_update_without_base_ok() {
let ctx = "FILE[foo.txt]:\nold\n\n=== bar.txt ===\ncontent\n";
@@ -1363,11 +1761,15 @@ mod tests {
kind: ActionKind::UpdateFile,
path: "foo.txt".to_string(),
content: Some("new".to_string()),
patch: None,
base_sha256: None,
},
Action {
kind: ActionKind::UpdateFile,
path: "bar.txt".to_string(),
content: Some("updated".to_string()),
patch: None,
base_sha256: None,
},
];
assert!(validate_update_without_base(&actions, Some(ctx)).is_ok());
@@ -1380,6 +1782,8 @@ mod tests {
kind: ActionKind::UpdateFile,
path: "unknown.txt".to_string(),
content: Some("new".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_update_without_base(&actions, Some(ctx)).is_err());
}
@@ -1390,6 +1794,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "~/etc/passwd".to_string(),
content: Some("x".to_string()),
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1400,6 +1806,8 @@ mod tests {
kind: ActionKind::CreateFile,
path: "README.md".to_string(),
content: None,
patch: None,
base_sha256: None,
}];
assert!(validate_actions(&actions).is_err());
}
@@ -1423,6 +1831,103 @@ mod tests {
assert_eq!(actions[0].path, "src");
}
#[test]
fn test_v2_update_existing_forbidden() {
let dir = tempfile::tempdir().unwrap();
let root = dir.path();
fs::create_dir_all(root.join("src")).unwrap();
fs::write(root.join("src/main.rs"), "fn main() {}\n").unwrap();
std::env::set_var("PAPAYU_PROTOCOL_VERSION", "2");
let actions = vec![Action {
kind: ActionKind::UpdateFile,
path: "src/main.rs".to_string(),
content: Some("fn main() { println!(\"x\"); }\n".to_string()),
patch: None,
base_sha256: None,
}];
let r = validate_v2_update_existing_forbidden(root, &actions);
std::env::remove_var("PAPAYU_PROTOCOL_VERSION");
assert!(r.is_err());
let e = r.unwrap_err();
assert!(e.contains("ERR_V2_UPDATE_EXISTING_FORBIDDEN"));
assert!(e.contains("PATCH_FILE"));
}
#[test]
fn test_build_repair_prompt_injects_sha256() {
let sha = "a".repeat(64);
std::env::set_var("PAPAYU_PROTOCOL_VERSION", "2");
let ctx = format!("FILE[src/main.rs] (sha256={}):\nfn main() {{}}\n", sha);
let validated = serde_json::json!({
"actions": [{
"kind": "PATCH_FILE",
"path": "src/main.rs",
"base_sha256": "wrong",
"patch": "--- a/foo\n+++ b/foo\n@@ -1,1 +1,2 @@\nold\n+new"
}]
});
let result = build_v2_patch_repair_prompt_with_sha(&ctx, &validated);
std::env::remove_var("PAPAYU_PROTOCOL_VERSION");
assert!(result.is_some());
let (p, paths) = result.unwrap();
assert!(p.contains("base_sha256"));
assert!(p.contains(&sha));
assert!(p.contains("src/main.rs"));
assert_eq!(paths, vec!["src/main.rs"]);
}
#[test]
fn test_repair_prompt_fallback_when_sha_missing() {
std::env::set_var("PAPAYU_PROTOCOL_VERSION", "2");
let ctx = "FILE[src/main.rs]:\nfn main() {}\n";
let validated = serde_json::json!({
"actions": [{
"kind": "PATCH_FILE",
"path": "src/main.rs",
"base_sha256": "wrong",
"patch": "--- a/foo\n+++ b/foo\n@@ -1,1 +1,2 @@\nold\n+new"
}]
});
let result = build_v2_patch_repair_prompt_with_sha(ctx, &validated);
std::env::remove_var("PAPAYU_PROTOCOL_VERSION");
assert!(result.is_none());
}
#[test]
fn test_repair_prompt_not_generated_when_base_matches() {
let sha = "b".repeat(64);
std::env::set_var("PAPAYU_PROTOCOL_VERSION", "2");
let ctx = format!("FILE[src/foo.rs] (sha256={}):\ncontent\n", sha);
let validated = serde_json::json!({
"actions": [{
"kind": "PATCH_FILE",
"path": "src/foo.rs",
"base_sha256": sha,
"patch": "--- a/foo\n+++ b/foo\n@@ -1,1 +1,2 @@\ncontent\n+more"
}]
});
let result = build_v2_patch_repair_prompt_with_sha(&ctx, &validated);
std::env::remove_var("PAPAYU_PROTOCOL_VERSION");
assert!(result.is_none());
}
#[test]
fn test_parse_actions_from_json_patch_file() {
let sha = "a".repeat(64);
let actions_str = format!(
r#"[{{"kind":"PATCH_FILE","path":"src/main.rs","patch":"--- a/foo\n+++ b/foo\n@@ -1,1 +1,2 @@\nold\n+new","base_sha256":"{}"}}]"#,
sha
);
let actions = parse_actions_from_json(&actions_str).unwrap();
assert_eq!(actions.len(), 1);
assert_eq!(actions[0].kind, ActionKind::PatchFile);
assert_eq!(actions[0].path, "src/main.rs");
assert!(actions[0].patch.is_some());
assert_eq!(actions[0].base_sha256.as_deref(), Some(sha.as_str()));
}
#[test]
fn golden_traces_v1_validate() {
let dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("../../docs/golden_traces/v1");
@@ -1515,4 +2020,88 @@ mod tests {
}
}
}
#[test]
fn golden_traces_v2_validate() {
let dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("../../docs/golden_traces/v2");
if !dir.exists() {
return;
}
let expected_schema_hash = schema_hash_for_version(2);
let v2_schema = compiled_schema_for_version(2).expect("v2 schema must compile");
for entry in fs::read_dir(&dir).unwrap() {
let path = entry.unwrap().path();
if path.extension().and_then(|s| s.to_str()) != Some("json") {
continue;
}
let name = path.file_name().unwrap().to_string_lossy();
let s = fs::read_to_string(&path).unwrap_or_else(|_| panic!("read {}", name));
let v: serde_json::Value =
serde_json::from_str(&s).unwrap_or_else(|e| panic!("{}: json {}", name, e));
assert_eq!(
v.get("protocol")
.and_then(|p| p.get("schema_version"))
.and_then(|x| x.as_u64()),
Some(2),
"{}: schema_version must be 2",
name
);
let sh = v
.get("protocol")
.and_then(|p| p.get("schema_hash"))
.and_then(|x| x.as_str())
.unwrap_or("");
assert_eq!(sh, expected_schema_hash, "{}: schema_hash", name);
let validated = v
.get("result")
.and_then(|r| r.get("validated_json"))
.cloned()
.unwrap_or(serde_json::Value::Null);
if validated.is_null() {
continue;
}
v2_schema
.validate(&validated)
.map_err(|errs| {
let msgs: Vec<String> = errs.map(|e| e.to_string()).collect();
format!("{}: v2 schema validation: {}", name, msgs.join("; "))
})
.unwrap();
let validated_str = serde_json::to_string(&validated).unwrap();
let parsed = super::parse_plan_response(&validated_str)
.unwrap_or_else(|e| panic!("{}: parse validated_json: {}", name, e));
if v.get("result")
.and_then(|r| r.get("validation_outcome"))
.and_then(|x| x.as_str())
== Some("ok")
{
assert!(
validate_actions(&parsed.actions).is_ok(),
"{}: validate_actions",
name
);
}
let mode = v
.get("request")
.and_then(|r| r.get("mode"))
.and_then(|x| x.as_str())
.unwrap_or("");
if mode == "apply" && parsed.actions.is_empty() {
let summary = validated
.get("summary")
.and_then(|x| x.as_str())
.unwrap_or("");
assert!(
summary.starts_with("NO_CHANGES:"),
"{}: apply with empty actions requires NO_CHANGES: prefix in summary",
name
);
}
}
}
}

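Both `extract_files_read_from_plan_context` tests above feed it headers of the form `FILE[path]:` or, in v2, `FILE[path] (sha256=…):`. A simplified stdlib sketch of that header scan (assumption: the real helper also handles the `=== path ===` form and other edge cases):

```rust
// Hedged sketch: collect paths from `FILE[path]` headers, with or
// without a `(sha256=...)` suffix. The real helper covers more forms.
fn files_from_context(ctx: &str) -> Vec<String> {
    let mut paths = Vec::new();
    for line in ctx.lines() {
        if let Some(rest) = line.strip_prefix("FILE[") {
            if let Some(end) = rest.find(']') {
                paths.push(rest[..end].to_string());
            }
        }
    }
    paths
}

fn main() {
    let ctx = "FILE[src/parser.py] (sha256=abc123):\n1|def parse\nFILE[README.md]:\n# x";
    let paths = files_from_context(ctx);
    assert_eq!(paths, vec!["src/parser.py".to_string(), "README.md".to_string()]);
    println!("{:?}", paths);
}
```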

@@ -19,6 +19,7 @@ mod trends;
mod undo_last;
mod undo_last_tx;
mod undo_status;
mod weekly_report;
pub use agentic_run::agentic_run;
pub use get_project_profile::get_project_profile;
@@ -38,3 +39,4 @@ pub use undo_last::{get_undo_redo_state_cmd, undo_available, undo_last};
pub use undo_last_tx::undo_last_tx;
pub use undo_status::undo_status;
pub use settings_export::{export_settings, import_settings};
pub use weekly_report::{analyze_weekly_reports, save_report_to_file, WeeklyReportResult};


@@ -1,3 +1,4 @@
use crate::patch::{apply_unified_diff_to_text, looks_like_unified_diff, sha256_hex};
use crate::tx::safe_join;
use crate::types::{ActionKind, ApplyPayload, DiffItem, PreviewResult};
use std::fs;
@@ -16,6 +17,8 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
old_content: Some("(blocked)".to_string()),
new_content: Some("(blocked)".to_string()),
summary: Some("BLOCKED: protected or non-text file".to_string()),
bytes_before: None,
bytes_after: None,
});
continue;
}
@@ -26,6 +29,8 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
old_content: None,
new_content: a.content.clone(),
summary: None,
bytes_before: None,
bytes_after: None,
},
ActionKind::CreateDir => DiffItem {
kind: "mkdir".to_string(),
@@ -33,15 +38,31 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
old_content: None,
new_content: None,
summary: None,
bytes_before: None,
bytes_after: None,
},
ActionKind::UpdateFile => {
let old = read_text_if_exists(root, &a.path);
DiffItem {
kind: "update".to_string(),
path: a.path.clone(),
old_content: old.clone(),
new_content: a.content.clone(),
summary: None,
bytes_before: old.as_ref().map(|s| s.len()),
bytes_after: a.content.as_ref().map(|s| s.len()),
}
}
ActionKind::PatchFile => {
let (diff, summary, bytes_before, bytes_after) = preview_patch_file(root, &a.path, a.patch.as_deref().unwrap_or(""), a.base_sha256.as_deref().unwrap_or(""));
DiffItem {
kind: "patch".to_string(),
path: a.path.clone(),
old_content: None,
new_content: Some(diff),
summary,
bytes_before,
bytes_after,
}
}
ActionKind::DeleteFile => {
@@ -49,9 +70,11 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
DiffItem {
kind: "delete".to_string(),
path: a.path.clone(),
old_content: old.clone(),
new_content: None,
summary: None,
bytes_before: old.as_ref().map(|s| s.len()),
bytes_after: None,
}
}
ActionKind::DeleteDir => DiffItem {
@@ -60,6 +83,8 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
old_content: None,
new_content: None,
summary: None,
bytes_before: None,
bytes_after: None,
},
};
diffs.push(item);
@@ -74,6 +99,42 @@ pub fn preview_actions(payload: ApplyPayload) -> Result<PreviewResult, String> {
Ok(PreviewResult { diffs, summary })
}
/// Returns (diff, summary, bytes_before, bytes_after).
fn preview_patch_file(
root: &std::path::Path,
rel: &str,
patch_text: &str,
base_sha256: &str,
) -> (String, Option<String>, Option<usize>, Option<usize>) {
if !looks_like_unified_diff(patch_text) {
return (patch_text.to_string(), Some("ERR_PATCH_NOT_UNIFIED: patch is not unified diff".into()), None, None);
}
let p = match safe_join(root, rel) {
Ok(p) => p,
Err(_) => return (patch_text.to_string(), Some("ERR_INVALID_PATH".into()), None, None),
};
if !p.is_file() {
return (patch_text.to_string(), Some("ERR_BASE_MISMATCH: file not found".into()), None, None);
}
let old_bytes = match fs::read(&p) {
Ok(b) => b,
Err(_) => return (patch_text.to_string(), Some("ERR_IO: cannot read file".into()), None, None),
};
let old_sha = sha256_hex(&old_bytes);
if old_sha != base_sha256 {
return (patch_text.to_string(), Some(format!("ERR_BASE_MISMATCH: have {}, want {}", old_sha, base_sha256)), None, None);
}
let old_text = match String::from_utf8(old_bytes) {
Ok(s) => s,
Err(_) => return (patch_text.to_string(), Some("ERR_NON_UTF8_FILE: PATCH_FILE requires UTF-8; the file is not UTF-8.".into()), None, None),
};
let bytes_before = old_text.len();
match apply_unified_diff_to_text(&old_text, patch_text) {
Ok(new_text) => (patch_text.to_string(), None, Some(bytes_before), Some(new_text.len())),
Err(_) => (patch_text.to_string(), Some("ERR_PATCH_APPLY_FAILED: could not apply patch".into()), None, None),
}
}
fn read_text_if_exists(root: &std::path::Path, rel: &str) -> Option<String> {
let p = safe_join(root, rel).ok()?;
if !p.is_file() {
@@ -90,13 +151,14 @@ fn read_text_if_exists(root: &std::path::Path, rel: &str) -> Option<String> {
fn summarize(diffs: &[DiffItem]) -> String {
let create = diffs.iter().filter(|d| d.kind == "create").count();
let update = diffs.iter().filter(|d| d.kind == "update").count();
let patch = diffs.iter().filter(|d| d.kind == "patch").count();
let delete = diffs.iter().filter(|d| d.kind == "delete").count();
let mkdir = diffs.iter().filter(|d| d.kind == "mkdir").count();
let rmdir = diffs.iter().filter(|d| d.kind == "rmdir").count();
let blocked = diffs.iter().filter(|d| d.kind == "blocked").count();
let mut s = format!(
"Создать: {}, изменить: {}, patch: {}, удалить: {}, mkdir: {}, rmdir: {}",
create, update, patch, delete, mkdir, rmdir
);
if blocked > 0 {
s.push_str(&format!(", заблокировано: {}", blocked));

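`preview_patch_file` above rejects input that fails `looks_like_unified_diff` before touching the filesystem. That helper is defined in `crate::patch` and not shown here; a minimal heuristic of the kind it plausibly implements (an assumption, the real check may be stricter) is to require the three unified-diff header markers:

```rust
// Hedged sketch of a unified-diff detector; the real
// looks_like_unified_diff in crate::patch may differ.
fn is_unified_diff(text: &str) -> bool {
    let has_old = text.lines().any(|l| l.starts_with("--- "));
    let has_new = text.lines().any(|l| l.starts_with("+++ "));
    let has_hunk = text.lines().any(|l| l.starts_with("@@ "));
    has_old && has_new && has_hunk
}

fn main() {
    let patch = "--- a/foo\n+++ b/foo\n@@ -1,1 +1,2 @@\n old\n+new\n";
    assert!(is_unified_diff(patch));
    assert!(!is_unified_diff("just some text"));
    println!("ok");
}
```

Rejecting early like this keeps `ERR_PATCH_NOT_UNIFIED` distinct from `ERR_PATCH_APPLY_FAILED`, which is only reported once a well-formed patch fails to apply.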

@@ -4,6 +4,7 @@
use std::path::Path;
use crate::online_research;
use crate::types::{Action, ActionKind, AgentPlan};
use tauri::Manager;
@@ -27,6 +28,17 @@ fn has_license(root: &str) -> bool {
}
/// Plan→Apply transition triggers (user confirmed the plan).
/// Extracts the error code prefix (ERR_XXX or LLM_REQUEST_TIMEOUT) from a message.
fn extract_error_code(msg: &str) -> &str {
if let Some(colon) = msg.find(':') {
let prefix = msg[..colon].trim();
if !prefix.is_empty() && prefix.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
return prefix;
}
}
""
}
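The helper treats everything before the first colon as the code, and only when that prefix is pure `[A-Za-z0-9_]`; anything else (e.g. a prefix containing spaces) yields the empty string, which the caller maps to `LLM_ERROR`. A self-contained copy for illustration, exercised against message shapes that appear in this commit:

```rust
// Self-contained mirror of extract_error_code above, for illustration.
fn extract_error_code(msg: &str) -> &str {
    if let Some(colon) = msg.find(':') {
        let prefix = msg[..colon].trim();
        if !prefix.is_empty() && prefix.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
            return prefix;
        }
    }
    ""
}

fn main() {
    assert_eq!(extract_error_code("ERR_JSON_PARSE: JSON parse: eof"), "ERR_JSON_PARSE");
    assert_eq!(extract_error_code("LLM_REQUEST_TIMEOUT: Request: timed out"), "LLM_REQUEST_TIMEOUT");
    // A prefix with spaces is not a code, so nothing is extracted.
    assert_eq!(extract_error_code("app data dir: denied"), "");
    println!("ok");
}
```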
const APPLY_TRIGGERS: &[&str] = &[
"ok", "ок", "apply", "применяй", "применить", "делай", "да", "yes", "go", "вперёд",
];
@@ -41,6 +53,15 @@ pub async fn propose_actions(
trends_context: Option<String>,
last_plan_json: Option<String>,
last_context: Option<String>,
apply_error_code: Option<String>,
apply_error_validated_json: Option<String>,
apply_repair_attempt: Option<u32>,
apply_error_stage: Option<String>,
online_fallback_attempted: Option<bool>,
online_context_md: Option<String>,
online_context_sources: Option<Vec<String>>,
online_fallback_executed: Option<bool>,
online_fallback_reason: Option<String>,
) -> AgentPlan {
let goal_trim = user_goal.trim();
let goal_lower = goal_trim.to_lowercase();
@@ -54,6 +75,9 @@ pub async fn propose_actions(
error_code: Some("PATH_NOT_FOUND".into()),
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: None,
online_context_used: None,
};
}
@@ -66,10 +90,13 @@ pub async fn propose_actions(
summary: String::new(),
actions: vec![],
error: Some(format!("app data dir: {}", e)),
error_code: Some("APP_DATA_DIR".into()),
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: None,
online_context_used: None,
};
}
};
let user_prefs_path = app_data.join("papa-yu").join("preferences.json");
@@ -101,6 +128,24 @@ pub async fn propose_actions(
let last_plan_ref = last_plan_json.as_deref();
let last_ctx_ref = last_context.as_deref();
let apply_error = apply_error_code.as_deref().and_then(|code| {
apply_error_validated_json.as_deref().map(|json| (code, json))
});
let force_protocol = {
let code = apply_error_code.as_deref().unwrap_or("");
let repair_attempt = apply_repair_attempt.unwrap_or(0);
if llm_planner::is_protocol_fallback_applicable(code, repair_attempt) {
let stage = apply_error_stage.as_deref().unwrap_or("apply");
eprintln!("[trace] PROTOCOL_FALLBACK from=v2 to=v1 reason={} stage={}", code, stage);
Some(1u32)
} else {
None
}
};
let apply_error_stage_ref = apply_error_stage.as_deref();
let online_md_ref = online_context_md.as_deref();
let online_sources_ref: Option<&[String]> = online_context_sources.as_deref();
let online_reason_ref = online_fallback_reason.as_deref();
return match llm_planner::plan(
&user_prefs_path,
&project_prefs_path,
@@ -113,19 +158,42 @@ pub async fn propose_actions(
output_format_override,
last_plan_ref,
last_ctx_ref,
apply_error,
force_protocol,
apply_error_stage_ref,
apply_repair_attempt,
online_md_ref,
online_sources_ref,
online_fallback_executed,
online_reason_ref,
)
.await
{
Ok(plan) => plan,
Err(e) => {
let error_code_str = extract_error_code(&e).to_string();
let online_suggested = online_research::maybe_online_fallback(
Some(&e),
online_research::is_online_research_enabled(),
online_fallback_attempted.unwrap_or(false),
)
.then_some(goal_trim.to_string());
if online_suggested.is_some() {
eprintln!("[trace] ONLINE_FALLBACK_SUGGESTED error_code={} query_len={}", error_code_str, goal_trim.len());
}
AgentPlan {
ok: false,
summary: String::new(),
actions: vec![],
error: Some(e),
error_code: Some(if error_code_str.is_empty() { "LLM_ERROR".into() } else { error_code_str }),
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: online_suggested,
online_context_used: None,
}
}
};
}
@@ -149,6 +217,9 @@ pub async fn propose_actions(
error_code: None,
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: None,
online_context_used: None,
};
}
@@ -172,6 +243,8 @@ pub async fn propose_actions(
"# PAPA YU Project\n\n## Цель\n{}\n\n## Как запустить\n- (добавить)\n\n## Структура\n- (добавить)\n",
user_goal
)),
patch: None,
base_sha256: None,
});
summary.push("Добавлю README.md".into());
}
@@ -183,6 +256,8 @@ pub async fn propose_actions(
content: Some(
"node_modules/\ndist/\nbuild/\n.DS_Store\n.env\n.env.*\ncoverage/\n.target/\n".into(),
),
patch: None,
base_sha256: None,
});
summary.push("Добавлю .gitignore".into());
}
@@ -202,6 +277,8 @@ pub async fn propose_actions(
content: Some(
"\"\"\"Точка входа. Запуск: python main.py\"\"\"\n\ndef main() -> None:\n print(\"Hello\")\n\n\nif __name__ == \"__main__\":\n main()\n".into(),
),
patch: None,
base_sha256: None,
});
summary.push("Добавлю main.py (скелет)".into());
}
@@ -212,6 +289,8 @@ pub async fn propose_actions(
kind: ActionKind::CreateFile,
path: "LICENSE".into(),
content: Some("UNLICENSED\n".into()),
patch: None,
base_sha256: None,
});
summary.push("Добавлю LICENSE (пометка UNLICENSED)".into());
}
@@ -221,6 +300,8 @@ pub async fn propose_actions(
kind: ActionKind::CreateFile,
path: ".env.example".into(),
content: Some("VITE_API_URL=\n# пример, без секретов\n".into()),
patch: None,
base_sha256: None,
});
summary.push("Добавлю .env.example (без секретов)".into());
}
@@ -234,6 +315,9 @@ pub async fn propose_actions(
error_code: None,
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: None,
online_context_used: None,
};
}
@@ -245,5 +329,8 @@ pub async fn propose_actions(
error_code: None,
plan_json: None,
plan_context: None,
protocol_version_used: None,
online_fallback_suggested: None,
online_context_used: None,
}
}


@@ -0,0 +1,982 @@
//! Weekly Report Analyzer: trace aggregation and LLM-based report generation.
use jsonschema::JSONSchema;
use serde::{Deserialize, Serialize};
use std::collections::{BTreeMap, HashMap};
use std::fs;
use std::path::Path;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WeeklyStatsBundle {
pub period_from: String,
pub period_to: String,
pub apply_count: u64,
pub fallback_count: u64,
pub fallback_rate: f64,
pub fallback_by_reason: BTreeMap<String, u64>,
pub fallback_by_group: BTreeMap<String, u64>,
pub fallback_excluding_non_utf8_rate: f64,
pub repair_attempt_rate: f64,
pub repair_success_rate: f64,
pub repair_to_fallback_rate: f64,
pub sha_injection_rate: f64,
pub top_sha_injected_paths: Vec<(String, u64)>,
pub top_error_codes: Vec<(String, u64)>,
pub error_codes_by_group: BTreeMap<String, u64>,
pub new_error_codes: Vec<(String, u64)>,
pub context: ContextAgg,
pub cache: CacheAgg,
#[serde(skip_serializing_if = "Option::is_none")]
pub previous: Option<PreviousPeriodStats>,
#[serde(skip_serializing_if = "Option::is_none")]
pub deltas: Option<DeltaStats>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PreviousPeriodStats {
pub period_from: String,
pub period_to: String,
pub apply_count: u64,
pub fallback_count: u64,
pub fallback_rate: f64,
pub fallback_excluding_non_utf8_rate: f64,
pub repair_attempt_rate: f64,
pub repair_success_rate: f64,
pub repair_to_fallback_rate: f64,
pub sha_injection_rate: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeltaStats {
pub delta_apply_count: i64,
pub delta_fallback_count: i64,
pub delta_fallback_rate: f64,
pub delta_fallback_excluding_non_utf8_rate: f64,
pub delta_repair_attempt_rate: f64,
pub delta_repair_success_rate: f64,
pub delta_repair_to_fallback_rate: f64,
pub delta_sha_injection_rate: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ContextAgg {
pub avg_total_chars: f64,
pub p95_total_chars: u64,
pub avg_files_count: f64,
pub avg_dropped_files: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CacheAgg {
pub avg_hit_rate: f64,
pub env_hit_rate: f64,
pub read_hit_rate: f64,
pub search_hit_rate: f64,
pub logs_hit_rate: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WeeklyReportResult {
pub ok: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub stats_bundle: Option<WeeklyStatsBundle>,
#[serde(skip_serializing_if = "Option::is_none")]
pub llm_report: Option<serde_json::Value>,
#[serde(skip_serializing_if = "Option::is_none")]
pub report_md: Option<String>,
}
/// Normalizes an error_code into a group for the breakdown.
fn group_error_code(code: &str) -> &'static str {
let code = code.to_uppercase();
if code.contains("SCHEMA") || code.contains("JSON_PARSE") || code.contains("JSON_EXTRACT") || code.contains("VALIDATION") {
"LLM_FORMAT"
} else if code.contains("PATCH") || code.contains("BASE_MISMATCH") || code.contains("BASE_SHA256") {
"PATCH"
} else if code.contains("PATH") || code.contains("CONFLICT") || code.contains("PROTECTED") || code.contains("UPDATE_WITHOUT_BASE") {
"SAFETY"
} else if code.contains("NON_UTF8") || code.contains("UTF8") || code.contains("ENCODING") {
"ENCODING"
} else if code.contains("UPDATE_EXISTING") || code.contains("UPDATE_FILE") {
"V2_UPDATE"
} else {
"OTHER"
}
}
/// Extracts the base ERR_ code (up to the colon).
fn extract_base_error_code(s: &str) -> Option<String> {
let s = s.trim();
if s.starts_with("ERR_") {
let base = s.split(':').next().unwrap_or(s).trim().to_string();
if !base.is_empty() {
return Some(base);
}
}
None
}
/// Collects error codes from golden traces (result.error_code). Searches project_path/docs/golden_traces and parent directories (for the papa-yu repo).
fn golden_trace_error_codes(project_path: &Path) -> std::collections::HashSet<String> {
use std::collections::HashSet;
let mut codes = HashSet::new();
let mut search_dirs = vec![project_path.to_path_buf()];
if let Some(parent) = project_path.parent() {
search_dirs.push(parent.to_path_buf());
}
for base in search_dirs {
for subdir in ["v1", "v2"] {
let dir = base.join("docs").join("golden_traces").join(subdir);
if !dir.exists() {
continue;
}
let Ok(entries) = fs::read_dir(&dir) else { continue };
for entry in entries.flatten() {
let path = entry.path();
if path.extension().and_then(|e| e.to_str()) != Some("json") {
continue;
}
let Ok(content) = fs::read_to_string(&path) else { continue };
let Ok(val) = serde_json::from_str::<serde_json::Value>(&content) else { continue };
if let Some(ec) = val.get("result").and_then(|r| r.get("error_code")).and_then(|v| v.as_str()) {
if let Some(b) = extract_base_error_code(ec) {
codes.insert(b);
}
}
}
}
}
codes
}
fn trace_to_sample(trace: &serde_json::Value) -> serde_json::Value {
let error_code = trace
.get("error_code")
.and_then(|v| v.as_str())
.or_else(|| trace.get("error").and_then(|v| v.as_str()));
serde_json::json!({
"event": trace.get("event"),
"error_code": error_code,
"protocol_attempts": trace.get("protocol_attempts"),
"protocol_fallback_reason": trace.get("protocol_fallback_reason"),
"protocol_repair_attempt": trace.get("protocol_repair_attempt"),
"repair_injected_paths": trace.get("repair_injected_paths"),
"actions_count": trace.get("actions_count"),
"context_stats": trace.get("context_stats"),
"cache_stats": trace.get("cache_stats"),
})
}
/// Collects traces from .papa-yu/traces for the given period (by file mtime).
pub fn collect_traces(
project_path: &Path,
from_secs: u64,
to_secs: u64,
) -> Result<Vec<(u64, serde_json::Value)>, String> {
let traces_dir = project_path.join(".papa-yu").join("traces");
if !traces_dir.exists() {
return Ok(vec![]);
}
let mut out = Vec::new();
for entry in fs::read_dir(&traces_dir).map_err(|e| format!("read_dir: {}", e))? {
let entry = entry.map_err(|e| format!("read_dir entry: {}", e))?;
let path = entry.path();
if path.extension().and_then(|e| e.to_str()) != Some("json") {
continue;
}
let meta = entry.metadata().map_err(|e| format!("metadata: {}", e))?;
let mtime = meta
.modified()
.ok()
.and_then(|t| t.duration_since(UNIX_EPOCH).ok())
.map(|d| d.as_secs())
.unwrap_or(0);
if mtime < from_secs || mtime > to_secs {
continue;
}
let content = fs::read_to_string(&path).map_err(|e| format!("read {}: {}", path.display(), e))?;
let trace: serde_json::Value = serde_json::from_str(&content).map_err(|e| format!("parse {}: {}", path.display(), e))?;
out.push((mtime, trace));
}
Ok(out)
}
/// Aggregates traces into a WeeklyStatsBundle. Does not fill previous/deltas/new_error_codes; those are added by analyze_weekly_reports.
pub fn aggregate_weekly(
traces: &[(u64, serde_json::Value)],
period_from: &str,
period_to: &str,
) -> WeeklyStatsBundle {
let mut apply_count: u64 = 0;
let mut fallback_count: u64 = 0;
let mut repair_attempt_count: u64 = 0;
let mut repair_to_fallback_count: u64 = 0;
let mut fallback_by_reason: BTreeMap<String, u64> = BTreeMap::new();
let mut fallback_non_utf8: u64 = 0;
let mut sha_injection_count: u64 = 0;
let mut path_counts: HashMap<String, u64> = HashMap::new();
let mut error_code_counts: HashMap<String, u64> = HashMap::new();
let mut context_total_chars: Vec<u64> = Vec::new();
let mut context_files_count: Vec<u64> = Vec::new();
let mut context_dropped: Vec<u64> = Vec::new();
let mut cache_hit_rates: Vec<f64> = Vec::new();
let mut cache_env_hits: u64 = 0;
let mut cache_env_misses: u64 = 0;
let mut cache_read_hits: u64 = 0;
let mut cache_read_misses: u64 = 0;
let mut cache_search_hits: u64 = 0;
let mut cache_search_misses: u64 = 0;
let mut cache_logs_hits: u64 = 0;
let mut cache_logs_misses: u64 = 0;
for (_, trace) in traces {
let event = trace.get("event").and_then(|v| v.as_str());
if event != Some("LLM_PLAN_OK") {
if event.is_some() {
let code = trace
.get("error_code")
.and_then(|v| v.as_str())
.or_else(|| trace.get("error").and_then(|v| v.as_str()));
if let Some(c) = code {
*error_code_counts.entry(c.to_string()).or_insert(0) += 1;
}
}
continue;
}
apply_count += 1;
if trace.get("protocol_repair_attempt").and_then(|v| v.as_u64()) == Some(0) {
repair_attempt_count += 1;
}
if trace.get("protocol_repair_attempt").and_then(|v| v.as_u64()) == Some(1) {
let fallback_attempted = trace.get("protocol_fallback_attempted").and_then(|v| v.as_bool()).unwrap_or(false);
let reason = trace.get("protocol_fallback_reason").and_then(|v| v.as_str()).unwrap_or("");
if !fallback_attempted || reason.is_empty() {
eprintln!(
"[trace] WEEKLY_REPORT_INVARIANT_VIOLATION protocol_repair_attempt=1 expected protocol_fallback_attempted=true and protocol_fallback_reason non-empty, got fallback_attempted={} reason_len={}",
fallback_attempted,
reason.len()
);
}
repair_to_fallback_count += 1;
}
if trace.get("protocol_fallback_attempted").and_then(|v| v.as_bool()).unwrap_or(false) {
fallback_count += 1;
let reason = trace
.get("protocol_fallback_reason")
.and_then(|v| v.as_str())
.unwrap_or("unknown")
.to_string();
*fallback_by_reason.entry(reason.clone()).or_insert(0) += 1;
if reason == "ERR_NON_UTF8_FILE" {
fallback_non_utf8 += 1;
}
}
if trace.get("repair_injected_sha256").and_then(|v| v.as_bool()).unwrap_or(false) {
sha_injection_count += 1;
if let Some(paths) = trace.get("repair_injected_paths").and_then(|v| v.as_array()) {
for p in paths {
if let Some(s) = p.as_str() {
*path_counts.entry(s.to_string()).or_insert(0) += 1;
}
}
}
}
if let Some(ctx) = trace.get("context_stats") {
if let Some(n) = ctx.get("context_total_chars").and_then(|v| v.as_u64()) {
context_total_chars.push(n);
}
if let Some(n) = ctx.get("context_files_count").and_then(|v| v.as_u64()) {
context_files_count.push(n);
}
if let Some(n) = ctx.get("context_files_dropped_count").and_then(|v| v.as_u64()) {
context_dropped.push(n);
}
}
if let Some(cache) = trace.get("cache_stats") {
if let Some(r) = cache.get("hit_rate").and_then(|v| v.as_f64()) {
cache_hit_rates.push(r);
}
cache_env_hits += cache.get("env_hits").and_then(|v| v.as_u64()).unwrap_or(0);
cache_env_misses += cache.get("env_misses").and_then(|v| v.as_u64()).unwrap_or(0);
cache_read_hits += cache.get("read_hits").and_then(|v| v.as_u64()).unwrap_or(0);
cache_read_misses += cache.get("read_misses").and_then(|v| v.as_u64()).unwrap_or(0);
cache_search_hits += cache.get("search_hits").and_then(|v| v.as_u64()).unwrap_or(0);
cache_search_misses += cache.get("search_misses").and_then(|v| v.as_u64()).unwrap_or(0);
cache_logs_hits += cache.get("logs_hits").and_then(|v| v.as_u64()).unwrap_or(0);
cache_logs_misses += cache.get("logs_misses").and_then(|v| v.as_u64()).unwrap_or(0);
}
}
let fallback_excluding_non_utf8 = fallback_count.saturating_sub(fallback_non_utf8);
let fallback_excluding_non_utf8_rate = if apply_count > 0 {
fallback_excluding_non_utf8 as f64 / apply_count as f64
} else {
0.0
};
let sha_injection_rate = if apply_count > 0 {
sha_injection_count as f64 / apply_count as f64
} else {
0.0
};
let mut top_paths: Vec<(String, u64)> = path_counts.into_iter().collect();
top_paths.sort_by(|a, b| b.1.cmp(&a.1));
top_paths.truncate(10);
let mut top_errors: Vec<(String, u64)> = error_code_counts.iter().map(|(k, v)| (k.clone(), *v)).collect();
top_errors.sort_by(|a, b| b.1.cmp(&a.1));
top_errors.truncate(10);
let mut error_codes_by_group: BTreeMap<String, u64> = BTreeMap::new();
for (code, count) in &error_code_counts {
let group = group_error_code(code).to_string();
*error_codes_by_group.entry(group).or_insert(0) += count;
}
for (reason, count) in &fallback_by_reason {
let group = group_error_code(reason).to_string();
*error_codes_by_group.entry(format!("fallback:{}", group)).or_insert(0) += count;
}
let mut fallback_by_group: BTreeMap<String, u64> = BTreeMap::new();
for (reason, count) in &fallback_by_reason {
let group = group_error_code(reason).to_string();
*fallback_by_group.entry(group).or_insert(0) += count;
}
let fallback_rate = if apply_count > 0 {
fallback_count as f64 / apply_count as f64
} else {
0.0
};
let repair_attempt_rate = if apply_count > 0 {
repair_attempt_count as f64 / apply_count as f64
} else {
0.0
};
let (repair_success_rate, repair_to_fallback_rate) = if repair_attempt_count > 0 {
let success_count = repair_attempt_count.saturating_sub(repair_to_fallback_count);
(
success_count as f64 / repair_attempt_count as f64,
repair_to_fallback_count as f64 / repair_attempt_count as f64,
)
} else {
(0.0, 0.0)
};
let avg_total_chars = if context_total_chars.is_empty() {
0.0
} else {
context_total_chars.iter().sum::<u64>() as f64 / context_total_chars.len() as f64
};
let mut sorted_chars = context_total_chars.clone();
sorted_chars.sort();
let p95_idx = (sorted_chars.len() as f64 * 0.95) as usize;
let p95_idx2 = p95_idx.min(sorted_chars.len().saturating_sub(1));
let p95_total_chars = *sorted_chars.get(p95_idx2).unwrap_or(&0);
let avg_files_count = if context_files_count.is_empty() {
0.0
} else {
context_files_count.iter().sum::<u64>() as f64 / context_files_count.len() as f64
};
let avg_dropped_files = if context_dropped.is_empty() {
0.0
} else {
context_dropped.iter().sum::<u64>() as f64 / context_dropped.len() as f64
};
let avg_hit_rate = if cache_hit_rates.is_empty() {
0.0
} else {
cache_hit_rates.iter().sum::<f64>() / cache_hit_rates.len() as f64
};
let env_total = cache_env_hits + cache_env_misses;
let env_hit_rate = if env_total > 0 {
cache_env_hits as f64 / env_total as f64
} else {
0.0
};
let read_total = cache_read_hits + cache_read_misses;
let read_hit_rate = if read_total > 0 {
cache_read_hits as f64 / read_total as f64
} else {
0.0
};
let search_total = cache_search_hits + cache_search_misses;
let search_hit_rate = if search_total > 0 {
cache_search_hits as f64 / search_total as f64
} else {
0.0
};
let logs_total = cache_logs_hits + cache_logs_misses;
let logs_hit_rate = if logs_total > 0 {
cache_logs_hits as f64 / logs_total as f64
} else {
0.0
};
WeeklyStatsBundle {
period_from: period_from.to_string(),
period_to: period_to.to_string(),
apply_count,
fallback_count,
fallback_rate,
fallback_by_reason,
fallback_by_group,
fallback_excluding_non_utf8_rate,
repair_attempt_rate,
repair_success_rate,
repair_to_fallback_rate,
sha_injection_rate,
top_sha_injected_paths: top_paths,
top_error_codes: top_errors,
error_codes_by_group,
new_error_codes: vec![],
context: ContextAgg {
avg_total_chars,
p95_total_chars,
avg_files_count,
avg_dropped_files,
},
cache: CacheAgg {
avg_hit_rate,
env_hit_rate,
read_hit_rate,
search_hit_rate,
logs_hit_rate,
},
previous: None,
deltas: None,
}
}
const WEEKLY_REPORT_SYSTEM_PROMPT: &str = r#"Ты анализируешь телеметрию работы AI-агента (протоколы v1/v2).
Твоя задача: составить еженедельный отчёт для оператора с выводами и конкретными предложениями улучшений.
Никаких патчей к проекту. Никаких actions. Только отчёт по схеме.
Пиши кратко, по делу. Предлагай меры, которые оператор реально может сделать.
ВАЖНО: Используй только предоставленные числа. Не выдумывай цифры. В evidence ссылайся на конкретные поля, например: fallback_rate_excluding_non_utf8_rate=0.012, fallback_by_reason.ERR_PATCH_APPLY_FAILED=3.
Рекомендуемые направления:
- Снизить ERR_PATCH_APPLY_FAILED: увеличить контекст hunks/прочитать больше строк вокруг
- Снизить UPDATE_FILE violations: усилить prompt или добавить ещё один repair шаблон
- Подкрутить контекст-диету/лимиты если p95 chars часто близко к лимиту
- Расширить protected paths если видны попытки трогать секреты
- Добавить golden trace сценарий если появляется новый тип фейла"#;
/// Calls the LLM to generate a report from the aggregated data.
pub async fn call_llm_report(
stats: &WeeklyStatsBundle,
traces: &[(u64, serde_json::Value)],
) -> Result<serde_json::Value, String> {
let api_url = std::env::var("PAPAYU_LLM_API_URL").map_err(|_| "PAPAYU_LLM_API_URL not set")?;
let api_url = api_url.trim();
if api_url.is_empty() {
return Err("PAPAYU_LLM_API_URL is empty".into());
}
let model = std::env::var("PAPAYU_LLM_MODEL").unwrap_or_else(|_| "gpt-4o-mini".to_string());
let api_key = std::env::var("PAPAYU_LLM_API_KEY").ok();
let schema: serde_json::Value =
serde_json::from_str(include_str!("../../config/llm_weekly_report_schema.json"))
.map_err(|e| format!("schema parse: {}", e))?;
let stats_json = serde_json::to_string_pretty(stats).map_err(|e| format!("serialize stats: {}", e))?;
let samples: Vec<serde_json::Value> = traces
.iter()
.take(5)
.map(|(_, t)| trace_to_sample(t))
.collect();
let samples_json = serde_json::to_string_pretty(&samples).map_err(|e| format!("serialize samples: {}", e))?;
let user_content = format!(
"Агрегированная телеметрия за период {} — {}:\n\n```json\n{}\n```\n\nПримеры трасс (без raw_content):\n\n```json\n{}\n```",
stats.period_from,
stats.period_to,
stats_json,
samples_json
);
let response_format = serde_json::json!({
"type": "json_schema",
"json_schema": {
"name": "weekly_report",
"schema": schema,
"strict": true
}
});
let body = serde_json::json!({
"model": model.trim(),
"messages": [
{ "role": "system", "content": WEEKLY_REPORT_SYSTEM_PROMPT },
{ "role": "user", "content": user_content }
],
"temperature": 0.2,
"max_tokens": 8192,
"response_format": response_format
});
let timeout_sec = std::env::var("PAPAYU_LLM_TIMEOUT_SEC")
.ok()
.and_then(|s| s.trim().parse::<u64>().ok())
.unwrap_or(90);
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(timeout_sec))
.build()
.map_err(|e| format!("HTTP client: {}", e))?;
let mut req = client.post(api_url).json(&body);
if let Some(ref key) = api_key {
if !key.trim().is_empty() {
req = req.header("Authorization", format!("Bearer {}", key.trim()));
}
}
let resp = req.send().await.map_err(|e| format!("Request: {}", e))?;
let status = resp.status();
let text = resp.text().await.map_err(|e| format!("Response: {}", e))?;
if !status.is_success() {
return Err(format!("API error {}: {}", status, text));
}
let chat: serde_json::Value = serde_json::from_str(&text).map_err(|e| format!("Response JSON: {}", e))?;
let content = chat
.get("choices")
.and_then(|c| c.as_array())
.and_then(|a| a.first())
.and_then(|c| c.get("message"))
.and_then(|m| m.get("content"))
.and_then(|c| c.as_str())
.ok_or_else(|| "No content in API response".to_string())?;
let report: serde_json::Value = serde_json::from_str(content).map_err(|e| format!("Report JSON: {}", e))?;
let compiled = JSONSchema::options()
.with_draft(jsonschema::Draft::Draft7)
.compile(&schema)
.map_err(|e| format!("Schema compile: {}", e))?;
if let Err(e) = compiled.validate(&report) {
let msg: Vec<String> = e.map(|ve| format!("{}", ve)).collect();
return Err(format!("Schema validation: {}", msg.join("; ")));
}
Ok(report)
}
/// Builds a self-contained markdown report: KPI table and top reasons first, then the LLM text.
pub fn build_self_contained_md(stats: &WeeklyStatsBundle, llm_md: &str) -> String {
let mut md = format!(
"# Weekly Report\n\nПериод: {} — {}\n\n",
stats.period_from, stats.period_to
);
md.push_str("## KPI (фактические)\n\n");
md.push_str("| Метрика | Значение |\n|--------|----------|\n");
md.push_str(&format!("| apply_count | {} |\n", stats.apply_count));
md.push_str(&format!("| fallback_count | {} |\n", stats.fallback_count));
md.push_str(&format!("| fallback_rate | {:.4} |\n", stats.fallback_rate));
md.push_str(&format!("| fallback_excluding_non_utf8_rate | {:.4} |\n", stats.fallback_excluding_non_utf8_rate));
md.push_str(&format!("| repair_attempt_rate | {:.4} |\n", stats.repair_attempt_rate));
md.push_str(&format!("| repair_success_rate | {:.4} |\n", stats.repair_success_rate));
md.push_str(&format!("| repair_to_fallback_rate | {:.4} |\n", stats.repair_to_fallback_rate));
md.push_str(&format!("| sha_injection_rate | {:.4} |\n", stats.sha_injection_rate));
md.push_str("\n");
if !stats.fallback_by_reason.is_empty() {
md.push_str("## Top fallback reasons\n\n");
md.push_str("| Причина | Кол-во |\n|---------|--------|\n");
for (reason, count) in stats.fallback_by_reason.iter().take(10) {
md.push_str(&format!("| {} | {} |\n", reason, count));
}
md.push_str("\n");
}
if !stats.fallback_by_group.is_empty() {
md.push_str("## Fallback по группам\n\n");
md.push_str("| Группа | Кол-во |\n|--------|--------|\n");
for (group, count) in &stats.fallback_by_group {
md.push_str(&format!("| {} | {} |\n", group, count));
}
md.push_str("\n");
}
if !stats.new_error_codes.is_empty() {
md.push_str("## Новые error codes (кандидаты на golden trace)\n\n");
for (code, count) in &stats.new_error_codes {
md.push_str(&format!("- {} ({} раз)\n", code, count));
}
md.push_str("\n");
}
if let Some(ref deltas) = stats.deltas {
md.push_str("## Дельты vs предыдущая неделя\n\n");
md.push_str(&format!("| delta_apply_count | {} |\n", deltas.delta_apply_count));
md.push_str(&format!("| delta_fallback_rate | {:+.4} |\n", deltas.delta_fallback_rate));
md.push_str(&format!("| delta_repair_attempt_rate | {:+.4} |\n", deltas.delta_repair_attempt_rate));
md.push_str(&format!("| delta_repair_success_rate | {:+.4} |\n", deltas.delta_repair_success_rate));
md.push_str("\n");
}
md.push_str("---\n\n");
md.push_str(llm_md);
md
}
/// Renders a Markdown report from the LLM response.
pub fn report_to_md(report: &serde_json::Value) -> String {
let title = report.get("title").and_then(|v| v.as_str()).unwrap_or("Weekly Report");
let period = report.get("period");
let from = period.and_then(|p| p.get("from")).and_then(|v| v.as_str()).unwrap_or("?");
let to = period.and_then(|p| p.get("to")).and_then(|v| v.as_str()).unwrap_or("?");
let summary = report.get("summary_md").and_then(|v| v.as_str()).unwrap_or("");
let mut md = format!("# {}\n\nПериод: {} — {}\n\n{}\n\n", title, from, to, summary);
if let Some(kpis) = report.get("kpis") {
md.push_str("## KPI\n\n");
md.push_str("| Метрика | Значение |\n|--------|----------|\n");
for (key, val) in kpis.as_object().unwrap_or(&serde_json::Map::new()) {
let v = match val {
serde_json::Value::Number(n) => n.to_string(),
serde_json::Value::String(s) => s.clone(),
_ => format!("{:?}", val),
};
md.push_str(&format!("| {} | {} |\n", key, v));
}
md.push_str("\n");
}
if let Some(findings) = report.get("findings").and_then(|v| v.as_array()) {
md.push_str("## Выводы\n\n");
for f in findings {
let sev = f.get("severity").and_then(|v| v.as_str()).unwrap_or("info");
let title_f = f.get("title").and_then(|v| v.as_str()).unwrap_or("");
let ev = f.get("evidence").and_then(|v| v.as_str()).unwrap_or("");
md.push_str(&format!("- **{}** [{}]: {}\n", title_f, sev, ev));
}
md.push_str("\n");
}
if let Some(recs) = report.get("recommendations").and_then(|v| v.as_array()) {
md.push_str("## Рекомендации\n\n");
for r in recs {
let pri = r.get("priority").and_then(|v| v.as_str()).unwrap_or("p2");
let title_r = r.get("title").and_then(|v| v.as_str()).unwrap_or("");
let rat = r.get("rationale").and_then(|v| v.as_str()).unwrap_or("");
md.push_str(&format!("- [{}] **{}**: {} {}\n", pri, title_r, rat, r.get("expected_impact").and_then(|v| v.as_str()).unwrap_or("")));
}
md.push_str("\n");
}
if let Some(actions) = report.get("operator_actions").and_then(|v| v.as_array()) {
md.push_str("## Действия оператора\n\n");
for a in actions {
let title_a = a.get("title").and_then(|v| v.as_str()).unwrap_or("");
let empty: Vec<serde_json::Value> = vec![];
let steps = a.get("steps").and_then(|v| v.as_array()).unwrap_or(&empty);
let est = a.get("time_estimate_minutes").and_then(|v| v.as_i64()).unwrap_or(0);
md.push_str(&format!("### {}\n\nОценка: {} мин\n\n", title_a, est));
for (i, s) in steps.iter().enumerate() {
if let Some(st) = s.as_str() {
md.push_str(&format!("{}. {}\n", i + 1, st));
}
}
md.push_str("\n");
}
}
md
}
/// Analyzes traces and generates the weekly report.
pub async fn analyze_weekly_reports(
project_path: &Path,
from: Option<String>,
to: Option<String>,
) -> WeeklyReportResult {
let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or(Duration::from_secs(0));
let now_secs = now.as_secs();
let week_secs: u64 = 7 * 24 * 3600;
let (to_secs, from_secs) = if let (Some(f), Some(t)) = (&from, &to) {
let from_secs = chrono_parse_or_default(f, now_secs.saturating_sub(week_secs));
let to_secs = chrono_parse_or_default(t, now_secs);
(to_secs, from_secs)
} else {
(now_secs, now_secs.saturating_sub(week_secs))
};
let traces = match collect_traces(project_path, from_secs, to_secs) {
Ok(t) => t,
Err(e) => {
return WeeklyReportResult {
ok: false,
error: Some(e),
stats_bundle: None,
llm_report: None,
report_md: None,
};
}
};
let from_str = format_timestamp(from_secs);
let to_str = format_timestamp(to_secs);
let period_secs = to_secs.saturating_sub(from_secs);
let prev_from_secs = from_secs.saturating_sub(period_secs);
let prev_to_secs = from_secs;
let prev_from_str = format_timestamp(prev_from_secs);
let prev_to_str = format_timestamp(prev_to_secs);
let mut stats = aggregate_weekly(&traces, &from_str, &to_str);
let prev_traces = collect_traces(project_path, prev_from_secs, prev_to_secs).unwrap_or_default();
if !prev_traces.is_empty() {
let prev_stats = aggregate_weekly(&prev_traces, &prev_from_str, &prev_to_str);
stats.previous = Some(PreviousPeriodStats {
period_from: prev_stats.period_from,
period_to: prev_stats.period_to,
apply_count: prev_stats.apply_count,
fallback_count: prev_stats.fallback_count,
fallback_rate: prev_stats.fallback_rate,
fallback_excluding_non_utf8_rate: prev_stats.fallback_excluding_non_utf8_rate,
repair_attempt_rate: prev_stats.repair_attempt_rate,
repair_success_rate: prev_stats.repair_success_rate,
repair_to_fallback_rate: prev_stats.repair_to_fallback_rate,
sha_injection_rate: prev_stats.sha_injection_rate,
});
stats.deltas = Some(DeltaStats {
delta_apply_count: stats.apply_count as i64 - prev_stats.apply_count as i64,
delta_fallback_count: stats.fallback_count as i64 - prev_stats.fallback_count as i64,
delta_fallback_rate: stats.fallback_rate - prev_stats.fallback_rate,
delta_fallback_excluding_non_utf8_rate: stats.fallback_excluding_non_utf8_rate - prev_stats.fallback_excluding_non_utf8_rate,
delta_repair_attempt_rate: stats.repair_attempt_rate - prev_stats.repair_attempt_rate,
delta_repair_success_rate: stats.repair_success_rate - prev_stats.repair_success_rate,
delta_repair_to_fallback_rate: stats.repair_to_fallback_rate - prev_stats.repair_to_fallback_rate,
delta_sha_injection_rate: stats.sha_injection_rate - prev_stats.sha_injection_rate,
});
}
let golden = golden_trace_error_codes(project_path);
let mut new_counts: HashMap<String, u64> = HashMap::new();
for (code, count) in stats
.top_error_codes
.iter()
.map(|(k, v)| (k.as_str(), *v))
.chain(stats.fallback_by_reason.iter().map(|(k, v)| (k.as_str(), *v)))
{
if let Some(base) = extract_base_error_code(code) {
if !golden.contains(&base) {
*new_counts.entry(base).or_insert(0) += count;
}
}
}
let mut new_errors: Vec<(String, u64)> = new_counts.into_iter().collect();
new_errors.sort_by(|a, b| b.1.cmp(&a.1));
stats.new_error_codes = new_errors;
if traces.is_empty() {
let report_md = format!(
"# Weekly Report\n\nПериод: {} — {}\n\nТрасс за период не найдено. Включи PAPAYU_TRACE=1 и выполни несколько операций.",
from_str, to_str
);
return WeeklyReportResult {
ok: true,
error: None,
stats_bundle: Some(stats),
llm_report: None,
report_md: Some(report_md),
};
}
match call_llm_report(&stats, &traces).await {
Ok(report) => {
let llm_md = report_to_md(&report);
let report_md = build_self_contained_md(&stats, &llm_md);
WeeklyReportResult {
ok: true,
error: None,
stats_bundle: Some(stats),
llm_report: Some(report),
report_md: Some(report_md),
}
}
Err(e) => WeeklyReportResult {
ok: false,
error: Some(e),
stats_bundle: Some(stats),
llm_report: None,
report_md: None,
},
}
}
fn chrono_parse_or_default(s: &str, default: u64) -> u64 {
use chrono::{NaiveDate, NaiveDateTime};
let s = s.trim();
if s.is_empty() {
return default;
}
for fmt in ["%Y-%m-%d %H:%M:%S", "%Y-%m-%dT%H:%M:%S"] {
if let Ok(dt) = NaiveDateTime::parse_from_str(s, fmt) {
return dt.and_utc().timestamp() as u64;
}
}
if let Ok(d) = NaiveDate::parse_from_str(s, "%Y-%m-%d") {
if let Some(dt) = d.and_hms_opt(0, 0, 0) {
return dt.and_utc().timestamp() as u64;
}
}
default
}
fn format_timestamp(secs: u64) -> String {
use chrono::{DateTime, Utc};
let dt = DateTime::<Utc>::from_timestamp(secs as i64, 0)
.unwrap_or_else(|| DateTime::<Utc>::from_timestamp(0, 0).unwrap());
dt.format("%Y-%m-%d").to_string()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_aggregate_weekly_empty() {
let traces: Vec<(u64, serde_json::Value)> = vec![];
let stats = aggregate_weekly(&traces, "2024-01-01", "2024-01-07");
assert_eq!(stats.apply_count, 0);
assert_eq!(stats.fallback_count, 0);
assert_eq!(stats.fallback_excluding_non_utf8_rate, 0.0);
assert_eq!(stats.repair_success_rate, 0.0);
assert_eq!(stats.sha_injection_rate, 0.0);
}
#[test]
fn test_aggregate_weekly_llm_plan_ok() {
let traces = vec![
(
1704067200, // 2024-01-01: repair attempt that succeeded (no fallback)
serde_json::json!({
"event": "LLM_PLAN_OK",
"protocol_repair_attempt": 0,
"actions_count": 2,
"context_stats": { "context_total_chars": 1000, "context_files_count": 1, "context_files_dropped_count": 0 },
"cache_stats": { "hit_rate": 0.5, "env_hits": 0, "env_misses": 1, "read_hits": 1, "read_misses": 0, "search_hits": 0, "search_misses": 0, "logs_hits": 0, "logs_misses": 0 }
}),
),
(
1704153600, // repair failed → fallback plan
serde_json::json!({
"event": "LLM_PLAN_OK",
"protocol_repair_attempt": 1,
"protocol_fallback_attempted": true,
"protocol_fallback_reason": "ERR_PATCH_APPLY_FAILED",
"actions_count": 1,
"context_stats": { "context_total_chars": 500, "context_files_count": 1, "context_files_dropped_count": 0 },
"cache_stats": { "hit_rate": 0.6, "env_hits": 1, "env_misses": 0, "read_hits": 1, "read_misses": 0, "search_hits": 0, "search_misses": 0, "logs_hits": 0, "logs_misses": 0 }
}),
),
];
let stats = aggregate_weekly(&traces, "2024-01-01", "2024-01-07");
assert_eq!(stats.apply_count, 2);
assert_eq!(stats.fallback_count, 1);
assert!((stats.fallback_excluding_non_utf8_rate - 0.5).abs() < 0.001);
assert!((stats.repair_attempt_rate - 0.5).abs() < 0.001); // 1 repair attempt / 2 applies
assert!((stats.repair_success_rate - 0.0).abs() < 0.001); // 0/1 repair attempts succeeded
assert!((stats.repair_to_fallback_rate - 1.0).abs() < 0.001); // 1/1 went to fallback
assert_eq!(stats.fallback_by_reason.get("ERR_PATCH_APPLY_FAILED"), Some(&1));
}
#[test]
fn test_group_error_code() {
assert_eq!(group_error_code("ERR_SCHEMA_VALIDATION"), "LLM_FORMAT");
assert_eq!(group_error_code("ERR_JSON_PARSE"), "LLM_FORMAT");
assert_eq!(group_error_code("ERR_PATCH_APPLY_FAILED"), "PATCH");
assert_eq!(group_error_code("ERR_BASE_MISMATCH"), "PATCH");
assert_eq!(group_error_code("ERR_NON_UTF8_FILE"), "ENCODING");
assert_eq!(group_error_code("ERR_INVALID_PATH"), "SAFETY");
assert_eq!(group_error_code("ERR_V2_UPDATE_EXISTING_FORBIDDEN"), "V2_UPDATE");
}
#[test]
fn test_build_self_contained_md() {
let stats = WeeklyStatsBundle {
period_from: "2024-01-01".into(),
period_to: "2024-01-07".into(),
apply_count: 10,
fallback_count: 1,
fallback_rate: 0.1,
fallback_by_reason: [("ERR_PATCH_APPLY_FAILED".into(), 1)].into_iter().collect(),
fallback_by_group: [("PATCH".into(), 1)].into_iter().collect(),
fallback_excluding_non_utf8_rate: 0.1,
repair_attempt_rate: 0.2,
repair_success_rate: 0.9,
repair_to_fallback_rate: 0.1,
sha_injection_rate: 0.05,
top_sha_injected_paths: vec![],
top_error_codes: vec![],
error_codes_by_group: [("PATCH".into(), 1)].into_iter().collect(),
new_error_codes: vec![("ERR_XYZ".into(), 2)],
context: ContextAgg { avg_total_chars: 0.0, p95_total_chars: 0, avg_files_count: 0.0, avg_dropped_files: 0.0 },
cache: CacheAgg { avg_hit_rate: 0.0, env_hit_rate: 0.0, read_hit_rate: 0.0, search_hit_rate: 0.0, logs_hit_rate: 0.0 },
previous: None,
deltas: None,
};
let md = build_self_contained_md(&stats, "## LLM Summary\n\nText.");
assert!(md.contains("apply_count"));
assert!(md.contains("ERR_PATCH_APPLY_FAILED"));
assert!(md.contains("ERR_XYZ"));
assert!(md.contains("LLM Summary"));
}
#[test]
fn test_report_to_md() {
let report = serde_json::json!({
"title": "Test Report",
"period": { "from": "2024-01-01", "to": "2024-01-07" },
"summary_md": "Summary text.",
"kpis": { "apply_count": 10, "fallback_count": 1 },
"findings": [{ "severity": "info", "title": "Finding 1", "evidence": "Evidence 1" }],
"recommendations": [{ "priority": "p1", "title": "Rec 1", "rationale": "Why", "expected_impact": "Impact" }],
"operator_actions": [{ "title": "Action 1", "steps": ["Step 1"], "time_estimate_minutes": 5 }]
});
let md = report_to_md(&report);
assert!(md.contains("# Test Report"));
assert!(md.contains("Summary text."));
assert!(md.contains("apply_count"));
assert!(md.contains("Finding 1"));
assert!(md.contains("Rec 1"));
assert!(md.contains("Action 1"));
}
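// Added sketch (not in the original commit): a direct test for the
// chrono_parse_or_default helper above, pinning date-only and datetime forms
// to the epoch value used in the trace fixtures (2024-01-01 00:00:00 UTC =
// 1704067200) and checking the fallback path for empty/garbage input.
#[test]
fn test_chrono_parse_or_default() {
assert_eq!(chrono_parse_or_default("2024-01-01", 0), 1704067200);
assert_eq!(chrono_parse_or_default("2024-01-01 00:00:00", 0), 1704067200);
assert_eq!(chrono_parse_or_default("2024-01-01T00:00:00", 0), 1704067200);
assert_eq!(chrono_parse_or_default("", 42), 42);
assert_eq!(chrono_parse_or_default("not a date", 7), 7);
}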
}
/// Saves the report to docs/reports/weekly_YYYY-MM-DD.md.
pub fn save_report_to_file(
project_path: &Path,
report_md: &str,
date: Option<&str>,
) -> Result<String, String> {
let date_str = date
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.unwrap_or_else(|| chrono::Utc::now().format("%Y-%m-%d").to_string());
let reports_dir = project_path.join("docs").join("reports");
fs::create_dir_all(&reports_dir).map_err(|e| format!("create_dir: {}", e))?;
let file_path = reports_dir.join(format!("weekly_{}.md", date_str));
fs::write(&file_path, report_md).map_err(|e| format!("write: {}", e))?;
Ok(file_path.to_string_lossy().to_string())
}

View File

@ -1,11 +1,17 @@
//! Auto-collection of LLM context: env, project prefs, context_requests (read_file, search, logs).
//! Caches read/search/logs/env within a session (plan cycle).
//! Protocol v2: FILE[path] (sha256=...) for base_sha256 in PATCH_FILE.
use crate::memory::EngineeringMemory;
use sha2::{Digest, Sha256};
use std::collections::HashMap;
use std::fs;
use std::path::Path;
fn protocol_version() -> u32 {
crate::protocol::protocol_version(None)
}
const MAX_CONTEXT_LINE_LEN: usize = 80_000;
const SEARCH_MAX_HITS: usize = 50;
@ -23,7 +29,7 @@ fn context_max_file_chars() -> usize {
.unwrap_or(20_000)
}
pub fn context_max_total_chars() -> usize {
std::env::var("PAPAYU_CONTEXT_MAX_TOTAL_CHARS")
.ok()
.and_then(|s| s.trim().parse().ok())
@ -184,6 +190,7 @@ pub struct FulfillResult {
/// Fulfills the model's context_requests and returns text to append to the user message.
/// Uses the cache when provided; logs CONTEXT_CACHE_HIT/MISS when trace_id is set.
/// With protocol_version=2, adds sha256 to FILE blocks: FILE[path] (sha256=...).
pub fn fulfill_context_requests(
project_root: &Path,
requests: &[serde_json::Value],
@ -191,6 +198,7 @@ pub fn fulfill_context_requests(
mut cache: Option<&mut ContextCache>,
trace_id: Option<&str>,
) -> FulfillResult {
let include_sha256 = protocol_version() == 2;
let mut parts = Vec::new();
let mut logs_chars: usize = 0;
for r in requests {
@ -222,8 +230,12 @@ pub fn fulfill_context_requests(
v
} else {
c.cache_stats.read_misses += 1;
let (snippet, sha) = read_file_snippet_with_sha256(project_root, path, start as usize, end as usize);
let out = if include_sha256 && !sha.is_empty() {
format!("FILE[{}] (sha256={}):\n{}", path, sha, snippet)
} else {
format!("FILE[{}]:\n{}", path, snippet)
};
if let Some(tid) = trace_id {
eprintln!("[{}] CONTEXT_CACHE_MISS key=read_file path={} size={}", tid, path, out.len());
}
@ -231,8 +243,12 @@ pub fn fulfill_context_requests(
out
}
} else {
let (snippet, sha) = read_file_snippet_with_sha256(project_root, path, start as usize, end as usize);
if include_sha256 && !sha.is_empty() {
format!("FILE[{}] (sha256={}):\n{}", path, sha, snippet)
} else {
format!("FILE[{}]:\n{}", path, snippet)
}
};
parts.push(content);
}
@ -408,6 +424,51 @@ pub fn fulfill_context_requests(
}
}
/// Reads a file and returns (snippet, sha256_hex). The sha256 covers the full file contents.
fn read_file_snippet_with_sha256(
root: &Path,
rel_path: &str,
start_line: usize,
end_line: usize,
) -> (String, String) {
let path = root.join(rel_path);
if !path.is_file() {
return (format!("(file not found: {})", rel_path), String::new());
}
let full_content = match fs::read_to_string(&path) {
Ok(c) => c,
Err(_) => return ("(failed to read file)".to_string(), String::new()),
};
let sha256_hex = {
let mut hasher = Sha256::new();
hasher.update(full_content.as_bytes());
format!("{:x}", hasher.finalize())
};
let lines: Vec<&str> = full_content.lines().collect();
let start = start_line.saturating_sub(1).min(lines.len());
let end = end_line.min(lines.len()).max(start);
let slice: Vec<&str> = lines.get(start..end).unwrap_or(&[]).into_iter().copied().collect();
let mut out = String::new();
for (i, line) in slice.iter().enumerate() {
let line_no = start + i + 1;
out.push_str(&format!("{}|{}\n", line_no, line));
}
let max_chars = context_max_file_chars().min(MAX_CONTEXT_LINE_LEN);
let snippet = if out.len() > max_chars {
let head = (max_chars as f32 * 0.6) as usize;
let tail = max_chars.saturating_sub(head + 30);
// Clamp both cut points to char boundaries so multi-byte UTF-8 cannot panic the slice.
let mut head_end = head.min(out.len());
while !out.is_char_boundary(head_end) {
head_end -= 1;
}
let mut tail_start = out.len().saturating_sub(tail);
while !out.is_char_boundary(tail_start) {
tail_start += 1;
}
format!(
"{}...[TRUNCATED {} chars]...\n{}",
&out[..head_end],
out.len(),
&out[tail_start..]
)
} else {
out
};
(snippet, sha256_hex)
}
fn read_file_snippet(root: &Path, rel_path: &str, start_line: usize, end_line: usize) -> String {
let path = root.join(rel_path);
if !path.is_file() {
@ -551,6 +612,37 @@ pub fn gather_auto_context_from_message(project_root: &Path, user_message: &str)
}
}
/// Extracts path → sha256 pairs from the context (FILE[path] (sha256=...):). For diagnostics and repair.
pub fn extract_file_sha256_from_context(context: &str) -> std::collections::HashMap<String, String> {
use std::collections::HashMap;
let mut m = HashMap::new();
for line in context.lines() {
if !line.starts_with("FILE[") {
continue;
}
let close = match line.find(']') {
Some(i) => i,
None => continue,
};
let path = &line[5..close];
let sha_tag = "(sha256=";
let sha_pos = match line.find(sha_tag) {
Some(i) => i,
None => continue,
};
let sha_start = sha_pos + sha_tag.len();
let sha_end = match line[sha_start..].find(')') {
Some(j) => sha_start + j,
None => continue,
};
let sha = &line[sha_start..sha_end];
if sha.len() == 64 && sha.bytes().all(|b| matches!(b, b'0'..=b'9' | b'a'..=b'f')) {
m.insert(path.to_string(), sha.to_string());
}
}
m
}
/// Extracts paths and line numbers from a (Python) traceback in the text. Used for error-driven auto context collection.
pub fn extract_traceback_files(text: &str) -> Vec<(String, usize)> {
let mut out = Vec::new();
@ -661,4 +753,47 @@ mod tests {
assert!(files[0].0.contains("main.py"));
assert_eq!(files[0].1, 42);
}
#[test]
fn test_extract_file_sha256_from_context() {
let ctx = r#"FILE[src/parser.py] (sha256=7f3f2a0c9f8b1a0c9b4c0f9e3d8a4b2d8c9e7f1a0b3c4d5e6f7a8b9c0d1e2f3a):
1|def parse
FILE[src/main.rs]:
fn main() {}"#;
let m = extract_file_sha256_from_context(ctx);
assert_eq!(m.len(), 1);
assert_eq!(m.get("src/parser.py").map(|s| s.as_str()), Some("7f3f2a0c9f8b1a0c9b4c0f9e3d8a4b2d8c9e7f1a0b3c4d5e6f7a8b9c0d1e2f3a"));
// src/main.rs has no sha256, so it must not be included
assert!(m.get("src/main.rs").is_none());
let sha_a = "a".repeat(64);
let sha_b = "b".repeat(64);
let ctx2a = format!("FILE[a.py] (sha256={}):\ncontent\n", sha_a);
let ctx2b = format!("FILE[b.rs] (sha256={}):\ncontent\n", sha_b);
let m2a = extract_file_sha256_from_context(&ctx2a);
let m2b = extract_file_sha256_from_context(&ctx2b);
assert_eq!(m2a.len(), 1);
assert_eq!(m2b.len(), 1);
assert_eq!(m2a.get("a.py").map(|s| s.len()), Some(64));
assert_eq!(m2b.get("b.rs").map(|s| s.len()), Some(64));
}
#[test]
fn test_render_file_block_v2_includes_sha256() {
use std::fs;
let dir = tempfile::tempdir().unwrap();
let root = dir.path();
fs::create_dir_all(root.join("src")).unwrap();
fs::write(root.join("src/main.rs"), "fn main() {}\n").unwrap();
std::env::set_var("PAPAYU_PROTOCOL_VERSION", "2");
let reqs = vec![serde_json::json!({"type": "read_file", "path": "src/main.rs", "start_line": 1, "end_line": 10})];
let result = fulfill_context_requests(root, &reqs, 200, None, None);
std::env::remove_var("PAPAYU_PROTOCOL_VERSION");
assert!(result.content.contains("FILE[src/main.rs] (sha256="));
assert!(result.content.contains("):"));
let m = extract_file_sha256_from_context(&result.content);
assert_eq!(m.len(), 1);
assert_eq!(m.get("src/main.rs").map(|s| s.len()), Some(64));
}
}

View File

@ -1,12 +1,15 @@
mod commands;
mod context;
mod online_research;
mod memory;
mod patch;
mod protocol;
mod store;
mod tx;
mod types;
mod verify;
use commands::{add_project, agentic_run, analyze_project, analyze_weekly_reports, append_session_event, apply_actions, apply_actions_tx, export_settings, fetch_trends_recommendations, generate_actions, generate_actions_from_report, get_project_profile, get_project_settings, get_trends_recommendations, get_undo_redo_state_cmd, import_settings, list_projects, list_sessions, load_folder_links, preview_actions, propose_actions, redo_last, run_batch, save_folder_links, save_report_to_file, set_project_settings, undo_available, undo_last, undo_last_tx, undo_status};
use tauri::Manager;
use commands::FolderLinks;
use types::{ApplyPayload, BatchPayload};
@ -49,6 +52,32 @@ fn verify_project(path: String) -> types::VerifyResult {
verify::verify_project(&path)
}
/// Weekly report analysis: aggregates traces and generates a report via the LLM.
#[tauri::command]
async fn analyze_weekly_reports_cmd(
project_path: String,
from: Option<String>,
to: Option<String>,
) -> commands::WeeklyReportResult {
analyze_weekly_reports(std::path::Path::new(&project_path), from, to).await
}
/// Online research: search + fetch + LLM summarize.
#[tauri::command]
async fn research_answer_cmd(query: String) -> Result<online_research::OnlineAnswer, String> {
online_research::research_answer(&query).await
}
/// Saves the report to docs/reports/weekly_YYYY-MM-DD.md.
#[tauri::command]
fn save_report_cmd(project_path: String, report_md: String, date: Option<String>) -> Result<String, String> {
save_report_to_file(
std::path::Path::new(&project_path),
&report_md,
date.as_deref(),
)
}
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
tauri::Builder::default()
@ -84,6 +113,9 @@ pub fn run() {
fetch_trends_recommendations,
export_settings,
import_settings,
analyze_weekly_reports_cmd,
save_report_cmd,
research_answer_cmd,
])
.run(tauri::generate_context!())
.expect("error while running tauri application");

View File

@ -0,0 +1,120 @@
//! Text extraction from HTML.
use scraper::{Html, Selector};
pub(crate) const MAX_CHARS: usize = 40_000;
/// Extracts text from HTML: strips script/style, takes the body, normalizes whitespace.
pub fn extract_text(html: &str) -> String {
let doc = Html::parse_document(html);
let body_html = match Selector::parse("body") {
Ok(s) => doc.select(&s).next().map(|el| el.html()),
Err(_) => None,
};
let fragment = body_html.unwrap_or_else(|| doc.root_element().html());
let without_script = remove_tag_content(&fragment, "script");
let without_style = remove_tag_content(&without_script, "style");
let without_noscript = remove_tag_content(&without_style, "noscript");
let cleaned = strip_tags_simple(&without_noscript);
let normalized = normalize_whitespace(&cleaned);
truncate_to(&normalized, MAX_CHARS)
}
fn remove_tag_content(html: &str, tag: &str) -> String {
let open = format!("<{}", tag);
let close = format!("</{}>", tag);
let mut out = String::with_capacity(html.len());
let mut i = 0;
let bytes = html.as_bytes();
while i < bytes.len() {
match find_ignore_case(bytes, i, &open) {
Some(start) => {
let after_open = start + open.len();
if let Some(end) = find_ignore_case(bytes, after_open, &close) {
out.push_str(&html[i..start]);
i = end + close.len();
} else {
// Unclosed tag: keep the remainder verbatim.
out.push_str(&html[i..]);
break;
}
}
None => {
// No further openings: copy the tail in one step. The previous
// per-char loop indexed chars() by a byte offset, which is O(n^2)
// and garbles multi-byte UTF-8.
out.push_str(&html[i..]);
break;
}
}
}
out
}
fn find_ignore_case(haystack: &[u8], start: usize, needle: &str) -> Option<usize> {
let needle_bytes = needle.as_bytes();
haystack[start..]
.windows(needle_bytes.len())
.position(|w| w.eq_ignore_ascii_case(needle_bytes))
.map(|p| start + p)
}
fn strip_tags_simple(html: &str) -> String {
let doc = Html::parse_fragment(html);
let root = doc.root_element();
let mut text = root.text().collect::<Vec<_>>().join(" ");
text = text.replace("\u{a0}", " ");
text
}
fn normalize_whitespace(s: &str) -> String {
let mut out = String::with_capacity(s.len());
let mut prev_space = false;
for c in s.chars() {
if c.is_whitespace() {
if !prev_space {
out.push(' ');
prev_space = true;
}
} else {
out.push(c);
prev_space = false;
}
}
out.trim().to_string()
}
pub(crate) fn truncate_to(s: &str, max: usize) -> String {
if s.chars().count() <= max {
s.to_string()
} else {
s.chars().take(max).collect::<String>() + "..."
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_extract_text_basic() {
let html = r#"<html><body><h1>Title</h1><p>Paragraph text.</p></body></html>"#;
let t = extract_text(html);
assert!(t.contains("Title"));
assert!(t.contains("Paragraph"));
}
#[test]
fn test_extract_removes_script() {
let html = r#"<body><p>Hello</p><script>alert(1)</script><p>World</p></body>"#;
let t = extract_text(html);
assert!(!t.contains("alert"));
assert!(t.contains("Hello"));
assert!(t.contains("World"));
}
#[test]
fn test_truncate_to() {
let s = "a".repeat(50_000);
let t = super::truncate_to(&s, super::MAX_CHARS);
assert!(t.ends_with("..."));
assert!(t.chars().count() <= super::MAX_CHARS + 3);
}
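// Added sketch (not in the original commit): normalize_whitespace is the
// collapsing step extract_text relies on; runs of any whitespace (including
// newlines and tabs) become a single space and the result is trimmed.
#[test]
fn test_normalize_whitespace() {
assert_eq!(normalize_whitespace("  a\n\n b\t\tc  "), "a b c");
assert_eq!(normalize_whitespace(""), "");
}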
}

View File

@ -0,0 +1,130 @@
//! Decision layer for online fallback.
/// Online fallback trigger codes.
const ONLINE_FALLBACK_ERROR_CODES: &[&str] = &[
"LLM_REQUEST_TIMEOUT",
"ERR_JSON_PARSE",
"ERR_JSON_EXTRACT",
"ERR_SCHEMA_VALIDATION",
];
/// Decides whether to suggest online fallback for a PRIMARY error.
///
/// Triggers: timeout, ERR_JSON_PARSE/ERR_JSON_EXTRACT/ERR_SCHEMA_VALIDATION after repair,
/// or an explicit NEEDS_ONLINE_RESEARCH in summary/context_requests.
///
/// Limit: once per request (online_fallback_already_attempted).
pub fn maybe_online_fallback(
error_message: Option<&str>,
online_enabled: bool,
online_fallback_already_attempted: bool,
) -> bool {
if !online_enabled || online_fallback_already_attempted {
return false;
}
let msg = match error_message {
Some(m) => m,
None => return false,
};
let code = extract_error_code_prefix(msg);
ONLINE_FALLBACK_ERROR_CODES.contains(&code)
}
/// Extracts a prefix of the form "ERR_XXX:" or "LLM_REQUEST_TIMEOUT:" from the message.
pub fn extract_error_code_prefix(msg: &str) -> &str {
if let Some(colon) = msg.find(':') {
let prefix = msg[..colon].trim();
if !prefix.is_empty() && prefix.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
return prefix;
}
}
""
}
/// Checks for NEEDS_ONLINE_RESEARCH or ONLINE: in summary/context_requests.
#[allow(dead_code)]
pub fn extract_needs_online_from_plan(summary: Option<&str>, context_requests_json: Option<&str>) -> Option<String> {
if let Some(s) = summary {
if let Some(q) = extract_online_query_from_text(s) {
return Some(q);
}
}
if let Some(json) = context_requests_json {
if let Ok(arr) = serde_json::from_str::<Vec<serde_json::Value>>(json) {
for req in arr {
if let Some(obj) = req.as_object() {
let ty = obj.get("type").and_then(|v| v.as_str()).unwrap_or("");
let query = obj.get("query").and_then(|v| v.as_str()).unwrap_or("");
if ty == "search" && query.starts_with("ONLINE:") {
let q = query.strip_prefix("ONLINE:").map(|s| s.trim()).unwrap_or(query).to_string();
if !q.is_empty() {
return Some(q);
}
}
}
}
}
}
None
}
#[allow(dead_code)]
fn extract_online_query_from_text(s: &str) -> Option<String> {
if let Some(idx) = s.find("NEEDS_ONLINE_RESEARCH:") {
let rest = &s[idx + "NEEDS_ONLINE_RESEARCH:".len()..];
let q = rest.lines().next().map(|l| l.trim()).unwrap_or(rest.trim());
if !q.is_empty() {
return Some(q.to_string());
}
}
None
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_maybe_online_timeout() {
assert!(maybe_online_fallback(
Some("LLM_REQUEST_TIMEOUT: Request: timed out"),
true,
false
));
}
#[test]
fn test_maybe_online_schema() {
assert!(maybe_online_fallback(
Some("ERR_SCHEMA_VALIDATION: missing required property"),
true,
false
));
}
#[test]
fn test_maybe_online_disabled() {
assert!(!maybe_online_fallback(
Some("ERR_SCHEMA_VALIDATION: x"),
false,
false
));
}
#[test]
fn test_maybe_online_already_attempted() {
assert!(!maybe_online_fallback(
Some("ERR_SCHEMA_VALIDATION: x"),
true,
true
));
}
#[test]
fn test_extract_needs_online() {
assert_eq!(
extract_needs_online_from_plan(Some("NEEDS_ONLINE_RESEARCH: latest React version"), None),
Some("latest React version".to_string())
);
}
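// Added sketch (not in the original commit): extract_error_code_prefix drives
// the trigger match above, so pin down its contract directly: a bare
// [A-Za-z0-9_]+ prefix before the first colon is returned; anything else
// (no colon, or spaces in the prefix) yields "".
#[test]
fn test_extract_error_code_prefix() {
assert_eq!(extract_error_code_prefix("ERR_JSON_PARSE: bad token"), "ERR_JSON_PARSE");
assert_eq!(extract_error_code_prefix("LLM_REQUEST_TIMEOUT: timed out"), "LLM_REQUEST_TIMEOUT");
assert_eq!(extract_error_code_prefix("no colon here"), "");
assert_eq!(extract_error_code_prefix("has space: x"), "");
}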
}

View File

@ -0,0 +1,144 @@
//! SSRF-safe HTTP fetch: blocks localhost, RFC1918, link-local.
use std::net::IpAddr;
use url::Url;
/// Checks whether a URL is allowed for fetch (SSRF protection).
fn is_url_allowed(u: &Url) -> bool {
let scheme = u.scheme().to_lowercase();
if scheme != "http" && scheme != "https" {
return false;
}
let host = match u.host_str() {
Some(h) => h,
None => return false,
};
let host_lower = host.to_lowercase();
if host_lower == "localhost"
|| host_lower == "127.0.0.1"
|| host_lower == "::1"
|| host_lower.ends_with(".localhost")
{
return false;
}
let host_clean = host.trim_matches(|c| c == '[' || c == ']');
if let Ok(ip) = host_clean.parse::<IpAddr>() {
if ip.is_loopback() {
return false;
}
if let IpAddr::V4(v4) = ip {
// is_link_local() already covers 169.254.0.0/16.
if v4.is_private() || v4.is_link_local() {
return false;
}
}
if let IpAddr::V6(v6) = ip {
if v6.is_loopback() {
return false;
}
// Link-local is fe80::/10: compare the top 10 bits of the first segment
// instead of string prefixes, which missed fea0-febf.
if (v6.segments()[0] & 0xffc0) == 0xfe80 {
return false;
}
}
}
true
}
/// Fetches a URL with size and timeout limits. SSRF-safe.
pub async fn fetch_url_safe(
url_str: &str,
max_bytes: usize,
timeout_sec: u64,
) -> Result<String, String> {
let url = Url::parse(url_str).map_err(|e| format!("Invalid URL: {}", e))?;
if !is_url_allowed(&url) {
return Err("URL not allowed (SSRF protection)".into());
}
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(timeout_sec))
.redirect(reqwest::redirect::Policy::limited(5))
.build()
.map_err(|e| format!("HTTP client: {}", e))?;
let resp = client
.get(url.as_str())
.send()
.await
.map_err(|e| format!("Request: {}", e))?;
let final_url = resp.url().clone();
if !is_url_allowed(&final_url) {
return Err("Redirect to disallowed URL (SSRF protection)".into());
}
let status = resp.status();
if !status.is_success() {
return Err(format!("HTTP {}", status));
}
let content_type = resp
.headers()
.get("content-type")
.and_then(|v| v.to_str().ok())
.unwrap_or("")
.to_lowercase();
if !content_type.is_empty()
&& !content_type.contains("text/html")
&& !content_type.contains("text/plain")
&& !content_type.contains("application/json")
&& !content_type.contains("application/xhtml")
{
return Err(format!("Unsupported content-type: {}", content_type));
}
let bytes = resp.bytes().await.map_err(|e| format!("Body: {}", e))?;
if bytes.len() > max_bytes {
return Err(format!("Response too large: {} > {}", bytes.len(), max_bytes));
}
let text = String::from_utf8_lossy(&bytes);
Ok(text.to_string())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_ssrf_block_localhost() {
assert!(!is_url_allowed(&Url::parse("http://localhost/").unwrap()));
assert!(!is_url_allowed(&Url::parse("http://127.0.0.1/").unwrap()));
assert!(!is_url_allowed(&Url::parse("http://[::1]/").unwrap()));
}
#[test]
fn test_ssrf_block_rfc1918() {
assert!(!is_url_allowed(&Url::parse("http://192.168.1.1/").unwrap()));
assert!(!is_url_allowed(&Url::parse("http://10.0.0.1/").unwrap()));
assert!(!is_url_allowed(&Url::parse("http://172.16.0.1/").unwrap()));
}
#[test]
fn test_ssrf_block_link_local() {
assert!(!is_url_allowed(&Url::parse("http://169.254.1.1/").unwrap()));
}
#[test]
fn test_ssrf_allow_public() {
assert!(is_url_allowed(&Url::parse("https://example.com/").unwrap()));
assert!(is_url_allowed(&Url::parse("https://8.8.8.8/").unwrap()));
}
#[test]
fn test_ssrf_block_file() {
assert!(!is_url_allowed(&Url::parse("file:///etc/passwd").unwrap()));
}
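// Added sketch (not in the original commit): two more SSRF edge cases worth
// pinning: *.localhost subdomains and non-HTTP(S) schemes such as ftp must
// both be rejected by is_url_allowed.
#[test]
fn test_ssrf_block_localhost_subdomain_and_scheme() {
assert!(!is_url_allowed(&Url::parse("http://app.localhost/").unwrap()));
assert!(!is_url_allowed(&Url::parse("ftp://example.com/").unwrap()));
}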
}

View File

@ -0,0 +1,167 @@
//! LLM summarize with sources (OpenAI Chat Completions + json_schema).
use jsonschema::JSONSchema;
use super::{OnlineAnswer, OnlineSource, SearchResult};
const SYSTEM_PROMPT: &str = r#"You answer the question using ONLY the provided sources (web page excerpts).
If the sources do not contain the answer, say the data is insufficient and suggest a clarifying query.
In the answer:
- answer_md: brief and to the point (markdown)
- sources: list the 2-5 most relevant URLs you actually used
- confidence: 0..1 (0.3 if the sources are weak/contradictory)
Do not invent facts. Do not use knowledge outside the sources."#;
/// Summarizes pages via the LLM with response_format json_schema.
pub async fn summarize_with_sources(
query: &str,
pages: &[(String, String, String)],
search_results: &[SearchResult],
) -> Result<OnlineAnswer, String> {
let api_url = std::env::var("PAPAYU_LLM_API_URL").map_err(|_| "PAPAYU_LLM_API_URL not set")?;
let api_url = api_url.trim();
if api_url.is_empty() {
return Err("PAPAYU_LLM_API_URL is empty".into());
}
let model = std::env::var("PAPAYU_ONLINE_MODEL")
.or_else(|_| std::env::var("PAPAYU_LLM_MODEL"))
.unwrap_or_else(|_| "gpt-4o-mini".to_string());
let api_key = std::env::var("PAPAYU_LLM_API_KEY").ok();
let schema: serde_json::Value =
serde_json::from_str(include_str!("../../config/llm_online_answer_schema.json"))
.map_err(|e| format!("schema: {}", e))?;
let mut sources_block = String::new();
for (i, (url, title, text)) in pages.iter().enumerate() {
// Truncate on char boundaries; a raw byte slice can panic inside multi-byte UTF-8.
let truncated = if text.chars().count() > 15_000 {
format!("{}...", text.chars().take(15_000).collect::<String>())
} else {
text.clone()
};
sources_block.push_str(&format!(
"\n\n--- Source {}: {} ---\nURL: {}\n\n{}\n",
i + 1,
title,
url,
truncated
));
}
let user_content = format!(
"Question: {}\n\nUse only these sources for the answer:\n{}",
query, sources_block
);
let response_format = serde_json::json!({
"type": "json_schema",
"json_schema": {
"name": "online_answer",
"schema": schema,
"strict": true
}
});
let body = serde_json::json!({
"model": model.trim(),
"messages": [
{ "role": "system", "content": SYSTEM_PROMPT },
{ "role": "user", "content": user_content }
],
"temperature": 0.2,
"max_tokens": 4096,
"response_format": response_format
});
let timeout_sec = std::env::var("PAPAYU_ONLINE_TIMEOUT_SEC")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(20);
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(timeout_sec))
.build()
.map_err(|e| format!("HTTP: {}", e))?;
let mut req = client.post(api_url).json(&body);
if let Some(ref key) = api_key {
if !key.trim().is_empty() {
req = req.header("Authorization", format!("Bearer {}", key.trim()));
}
}
let resp = req.send().await.map_err(|e| format!("Request: {}", e))?;
let status = resp.status();
let text = resp.text().await.map_err(|e| format!("Response: {}", e))?;
if !status.is_success() {
return Err(format!("API {}: {}", status, text));
}
let chat: serde_json::Value =
serde_json::from_str(&text).map_err(|e| format!("JSON: {}", e))?;
let content = chat
.get("choices")
.and_then(|c| c.as_array())
.and_then(|a| a.first())
.and_then(|c| c.get("message"))
.and_then(|m| m.get("content"))
.and_then(|c| c.as_str())
.ok_or("No content in response")?;
let report: serde_json::Value =
serde_json::from_str(content).map_err(|e| format!("Report JSON: {}", e))?;
let compiled = JSONSchema::options()
.with_draft(jsonschema::Draft::Draft7)
.compile(&schema)
.map_err(|e| format!("Schema: {}", e))?;
if let Err(e) = compiled.validate(&report) {
let msg: Vec<String> = e.map(|ve| format!("{}", ve)).collect();
return Err(format!("Validation: {}", msg.join("; ")));
}
let answer_md = report
.get("answer_md")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let confidence = report.get("confidence").and_then(|v| v.as_f64()).unwrap_or(0.0);
let notes = report.get("notes").and_then(|v| v.as_str()).map(|s| s.to_string());
let sources: Vec<OnlineSource> = report
.get("sources")
.and_then(|v| v.as_array())
.unwrap_or(&vec![])
.iter()
.filter_map(|s| {
let url = s.get("url")?.as_str()?.to_string();
let title = s.get("title")?.as_str().unwrap_or("").to_string();
let published_at = s.get("published_at").and_then(|v| v.as_str()).map(|s| s.to_string());
let snippet = s.get("snippet").and_then(|v| v.as_str()).map(|s| s.to_string());
Some(OnlineSource {
url,
title,
published_at,
snippet,
})
})
.collect();
let mut final_sources = sources;
if final_sources.is_empty() {
for r in search_results.iter().take(5) {
final_sources.push(OnlineSource {
url: r.url.clone(),
title: r.title.clone(),
published_at: None,
snippet: r.snippet.clone(),
});
}
}
Ok(OnlineAnswer {
answer_md,
sources: final_sources,
confidence,
notes,
})
}


@@ -0,0 +1,155 @@
//! Online Research Fallback: Search API + Fetch + LLM.
//!
//! Env: PAPAYU_ONLINE_RESEARCH, PAPAYU_SEARCH_PROVIDER, PAPAYU_TAVILY_API_KEY,
//! PAPAYU_ONLINE_MODEL, PAPAYU_ONLINE_MAX_SOURCES, PAPAYU_ONLINE_MAX_PAGES,
//! PAPAYU_ONLINE_PAGE_MAX_BYTES, PAPAYU_ONLINE_TIMEOUT_SEC.
mod online_context;
mod extract;
mod fallback;
mod fetch;
mod llm;
mod search;
#[cfg(test)]
mod online_context_auto_test;
pub use fallback::{maybe_online_fallback, extract_error_code_prefix};
pub use self::online_context::{
build_online_context_block, effective_online_max_chars, online_context_max_chars,
online_context_max_sources, OnlineBlockResult,
};
use serde::{Deserialize, Serialize};
pub use search::SearchResult;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OnlineAnswer {
pub answer_md: String,
pub sources: Vec<OnlineSource>,
pub confidence: f64,
#[serde(skip_serializing_if = "Option::is_none")]
pub notes: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OnlineSource {
pub url: String,
pub title: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub published_at: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub snippet: Option<String>,
}
/// Orchestrates: search → fetch → extract → LLM summarize.
pub async fn research_answer(query: &str) -> Result<OnlineAnswer, String> {
if !is_online_research_enabled() {
return Err("Online research disabled (PAPAYU_ONLINE_RESEARCH=1 to enable)".into());
}
let max_sources = max_sources();
let max_pages = max_pages();
let page_max_bytes = page_max_bytes();
let timeout_sec = timeout_sec();
let search_results = search::tavily_search(query, max_sources).await?;
let mut pages: Vec<(String, String, String)> = vec![];
let mut fetch_failures = 0usize;
for r in search_results.iter().take(max_pages) {
match fetch::fetch_url_safe(&r.url, page_max_bytes, timeout_sec).await {
Ok(body) => {
let text = extract::extract_text(&body);
if !text.trim().is_empty() {
pages.push((r.url.clone(), r.title.clone(), text));
}
}
Err(e) => {
fetch_failures += 1;
eprintln!("[online_research] fetch {} failed: {}", r.url, e);
}
}
}
let online_model = std::env::var("PAPAYU_ONLINE_MODEL")
.or_else(|_| std::env::var("PAPAYU_LLM_MODEL"))
.unwrap_or_else(|_| "gpt-4o-mini".to_string());
eprintln!(
"[trace] ONLINE_RESEARCH query_len={} sources_count={} pages_fetched={} fetch_failures={} model={}",
query.len(),
search_results.len(),
pages.len(),
fetch_failures,
online_model.trim()
);
if pages.is_empty() {
return Ok(OnlineAnswer {
answer_md: format!(
"Не удалось загрузить источники для запроса «{}». Попробуйте уточнить запрос или проверить доступность поиска.",
query
),
sources: search_results
.iter()
.take(5)
.map(|r| OnlineSource {
url: r.url.clone(),
title: r.title.clone(),
published_at: None,
snippet: r.snippet.clone(),
})
.collect(),
confidence: 0.0,
notes: Some("No pages fetched".into()),
});
}
llm::summarize_with_sources(query, &pages, &search_results).await
}
pub fn is_online_research_enabled() -> bool {
std::env::var("PAPAYU_ONLINE_RESEARCH")
.ok()
.map(|s| matches!(s.trim().to_lowercase().as_str(), "1" | "true" | "yes"))
.unwrap_or(false)
}
/// Returns whether auto-use-as-context is enabled for online research.
pub fn is_online_auto_use_as_context() -> bool {
std::env::var("PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT")
.ok()
.map(|s| matches!(s.trim().to_lowercase().as_str(), "1" | "true" | "yes"))
.unwrap_or(false)
}
fn max_sources() -> usize {
std::env::var("PAPAYU_ONLINE_MAX_SOURCES")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(5)
.clamp(1, 20)
}
fn max_pages() -> usize {
std::env::var("PAPAYU_ONLINE_MAX_PAGES")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(4)
.clamp(1, 10)
}
fn page_max_bytes() -> usize {
std::env::var("PAPAYU_ONLINE_PAGE_MAX_BYTES")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(200_000)
.clamp(10_000, 500_000)
}
fn timeout_sec() -> u64 {
std::env::var("PAPAYU_ONLINE_TIMEOUT_SEC")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(20)
.clamp(5, 60)
}
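The four clamped getters above share one read → trim → parse → default → clamp pattern. A minimal standalone sketch (the `DEMO_MAX_SOURCES` variable name is hypothetical, for illustration only):

```rust
// Standalone sketch of the env-var pattern used by max_sources, max_pages,
// page_max_bytes and timeout_sec: unset or unparsable values fall back to the
// default, and parsed values are clamped into the allowed range.
fn env_usize(name: &str, default: usize, min: usize, max: usize) -> usize {
    std::env::var(name)
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(default)
        .clamp(min, max)
}

fn main() {
    std::env::remove_var("DEMO_MAX_SOURCES");
    assert_eq!(env_usize("DEMO_MAX_SOURCES", 5, 1, 20), 5); // unset -> default
    std::env::set_var("DEMO_MAX_SOURCES", " 100 ");
    assert_eq!(env_usize("DEMO_MAX_SOURCES", 5, 1, 20), 20); // out of range -> clamped
    std::env::set_var("DEMO_MAX_SOURCES", "garbage");
    assert_eq!(env_usize("DEMO_MAX_SOURCES", 5, 1, 20), 5); // unparsable -> default
}
```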


@@ -0,0 +1,160 @@
//! Online context: truncation, sanitization, block building.
/// Maximum characters for the online summary (PAPAYU_ONLINE_CONTEXT_MAX_CHARS).
pub fn online_context_max_chars() -> usize {
std::env::var("PAPAYU_ONLINE_CONTEXT_MAX_CHARS")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(8000)
.clamp(256, 32_000)
}
/// Maximum number of sources (PAPAYU_ONLINE_CONTEXT_MAX_SOURCES).
pub fn online_context_max_sources() -> usize {
std::env::var("PAPAYU_ONLINE_CONTEXT_MAX_SOURCES")
.ok()
.and_then(|s| s.trim().parse().ok())
.unwrap_or(10)
.clamp(1, 20)
}
/// Truncates and sanitizes online markdown: cuts on a char boundary, strips NUL/control chars, normalizes \r\n -> \n.
pub fn truncate_online_context(md: &str, max_chars: usize) -> String {
let sanitized: String = md
.chars()
.filter(|c| !c.is_control() || *c == '\n' || *c == '\t')
.collect();
let normalized = sanitized.replace("\r\n", "\n").replace('\r', "\n");
if normalized.chars().count() <= max_chars {
normalized
} else {
normalized.chars().take(max_chars).collect::<String>() + "..."
}
}
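One subtlety of the function above, shown as a self-contained sketch: `'\r'` is itself a control character, so the char filter already removes it before the CRLF replace runs, and the replace is effectively a belt-and-braces step.

```rust
// Standalone re-sketch of truncate_online_context's three steps:
// 1) drop control chars except '\n' and '\t' (this already removes '\r'),
// 2) normalize any remaining CRLF/CR to '\n' (defensive no-op after step 1),
// 3) truncate on a char boundary and append "..." as a marker.
fn truncate_sketch(md: &str, max_chars: usize) -> String {
    let sanitized: String = md
        .chars()
        .filter(|c| !c.is_control() || *c == '\n' || *c == '\t')
        .collect();
    let normalized = sanitized.replace("\r\n", "\n").replace('\r', "\n");
    if normalized.chars().count() <= max_chars {
        normalized
    } else {
        normalized.chars().take(max_chars).collect::<String>() + "..."
    }
}

fn main() {
    assert_eq!(truncate_sketch("a\r\nb\x00c", 100), "a\nbc"); // '\r' and NUL stripped
    assert_eq!(truncate_sketch("hello", 3), "hel...");        // char-boundary cut + marker
}
```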
/// Result of building the online block: the block text plus truncation/drop flags and usage stats.
#[derive(Clone, Debug)]
pub struct OnlineBlockResult {
pub block: String,
pub was_truncated: bool,
pub dropped: bool,
pub chars_used: usize,
pub sources_count: usize,
}
/// Builds the ONLINE_RESEARCH_SUMMARY + ONLINE_SOURCES block for insertion into the prompt.
/// sources is a list of URLs (trimmed to max_sources).
pub fn build_online_context_block(md: &str, sources: &[String], max_chars: usize, max_sources: usize) -> OnlineBlockResult {
let truncated = truncate_online_context(md, max_chars);
let was_truncated = md.chars().count() > max_chars;
if truncated.trim().len() < 64 {
return OnlineBlockResult {
block: String::new(),
was_truncated: false,
dropped: true,
chars_used: 0,
sources_count: 0,
};
}
let sources_trimmed: Vec<&str> = sources.iter().map(|s| s.as_str()).take(max_sources).collect();
let mut block = String::new();
block.push_str("\n\nONLINE_RESEARCH_SUMMARY:\n");
block.push_str(&truncated);
block.push_str("\n\nONLINE_SOURCES:\n");
for url in &sources_trimmed {
block.push_str("- ");
block.push_str(url);
block.push('\n');
}
let chars_used = block.chars().count();
OnlineBlockResult {
block,
was_truncated,
dropped: false,
chars_used,
sources_count: sources_trimmed.len(),
}
}
/// Computes the allowed max_chars for online content given the total budget.
/// rest_context_chars is the size of base + prompt_body + auto without online.
/// priority0_reserved is the minimum reserve for FILE (4096).
/// Returns 0 (drop) if fewer than 512 chars would remain for online content.
pub fn effective_online_max_chars(
rest_context_chars: usize,
max_total: usize,
priority0_reserved: usize,
) -> usize {
let available = max_total.saturating_sub(rest_context_chars).saturating_sub(priority0_reserved);
if available < 512 {
0
} else {
available
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_truncate_online_context_limits() {
let md = "a".repeat(10_000);
let t = truncate_online_context(&md, 1000);
assert_eq!(t.len(), 1003); // 1000 chars + "..."
assert!(t.ends_with("..."));
}
#[test]
fn test_truncate_removes_control() {
let md = "hello\x00world\nok";
let t = truncate_online_context(md, 100);
assert!(!t.contains('\x00'));
assert!(t.contains("hello"));
}
#[test]
fn test_truncate_normalizes_crlf() {
let md = "a\r\nb\r\nc";
let t = truncate_online_context(md, 100);
assert!(!t.contains("\r"));
}
#[test]
fn test_build_block_dropped_when_short() {
let r = build_online_context_block("x", &[], 8000, 10);
assert!(r.block.is_empty());
assert!(r.dropped);
}
#[test]
fn test_build_block_contains_summary() {
let md = "This is a longer summary with enough content to pass the 64 char minimum.";
let r = build_online_context_block(md, &["https://example.com".into()], 8000, 10);
assert!(!r.dropped);
assert!(r.block.contains("ONLINE_RESEARCH_SUMMARY:"));
assert!(r.block.contains("ONLINE_SOURCES:"));
assert!(r.block.contains("https://example.com"));
}
#[test]
fn test_effective_online_max_chars_drops_when_budget_small() {
let rest = 119_000;
let max_total = 120_000;
let reserved = 4096;
let effective = effective_online_max_chars(rest, max_total, reserved);
assert_eq!(effective, 0);
}
#[test]
fn test_effective_online_max_chars_returns_available() {
let rest = 50_000;
let max_total = 120_000;
let reserved = 4096;
let effective = effective_online_max_chars(rest, max_total, reserved);
assert!(effective >= 65_000);
}
}


@@ -0,0 +1,37 @@
//! Tests for auto-use online context flow.
#[cfg(test)]
mod tests {
use crate::online_research;
#[test]
fn test_is_online_auto_use_disabled_by_default() {
std::env::remove_var("PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT");
assert!(!online_research::is_online_auto_use_as_context());
}
#[test]
fn test_is_online_auto_use_enabled_when_set() {
std::env::set_var("PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT", "1");
assert!(online_research::is_online_auto_use_as_context());
std::env::remove_var("PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT");
}
#[test]
fn test_extract_error_code_prefix_timeout() {
let msg = "LLM_REQUEST_TIMEOUT: request timed out";
assert_eq!(online_research::extract_error_code_prefix(msg), "LLM_REQUEST_TIMEOUT");
}
#[test]
fn test_extract_error_code_prefix_schema() {
let msg = "ERR_SCHEMA_VALIDATION: missing required property";
assert_eq!(online_research::extract_error_code_prefix(msg), "ERR_SCHEMA_VALIDATION");
}
#[test]
fn test_extract_error_code_prefix_empty_when_no_prefix() {
let msg = "Some generic error message";
assert_eq!(online_research::extract_error_code_prefix(msg), "");
}
}


@@ -0,0 +1,68 @@
//! Search provider: Tavily API.
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
pub title: String,
pub url: String,
pub snippet: Option<String>,
}
/// Tavily Search API: POST https://api.tavily.com/search
pub async fn tavily_search(query: &str, max_results: usize) -> Result<Vec<SearchResult>, String> {
let api_key = std::env::var("PAPAYU_TAVILY_API_KEY")
.map_err(|_| "PAPAYU_TAVILY_API_KEY not set")?;
let api_key = api_key.trim();
if api_key.is_empty() {
return Err("PAPAYU_TAVILY_API_KEY is empty".into());
}
let body = serde_json::json!({
"query": query,
"max_results": max_results,
"include_answer": false,
"include_raw_content": false,
});
let timeout = std::time::Duration::from_secs(15);
let client = reqwest::Client::builder()
.timeout(timeout)
.build()
.map_err(|e| format!("HTTP client: {}", e))?;
let resp = client
.post("https://api.tavily.com/search")
.header("Content-Type", "application/json")
.header("Authorization", format!("Bearer {}", api_key))
.json(&body)
.send()
.await
.map_err(|e| format!("Tavily request: {}", e))?;
let status = resp.status();
let text = resp.text().await.map_err(|e| format!("Response: {}", e))?;
if !status.is_success() {
return Err(format!("Tavily API {}: {}", status, text));
}
let val: serde_json::Value =
serde_json::from_str(&text).map_err(|e| format!("Tavily JSON: {}", e))?;
let results = val
.get("results")
.and_then(|r| r.as_array())
.ok_or_else(|| "Tavily: no results array".to_string())?;
let out: Vec<SearchResult> = results
.iter()
.filter_map(|r| {
let url = r.get("url")?.as_str()?.to_string();
let title = r.get("title")?.as_str().unwrap_or("").to_string();
let snippet = r.get("content").and_then(|v| v.as_str()).map(|s| s.to_string());
Some(SearchResult { title, url, snippet })
})
.collect();
Ok(out)
}

src-tauri/src/patch.rs (new file)

@@ -0,0 +1,97 @@
//! PATCH_FILE engine: sha256, unified diff validation, apply.
use sha2::{Digest, Sha256};
/// SHA256 hex (lowercase) of the given bytes.
pub fn sha256_hex(bytes: &[u8]) -> String {
let mut hasher = Sha256::new();
hasher.update(bytes);
hex::encode(hasher.finalize())
}
/// Checks that a string is a valid sha256 hex digest (64 chars, 0-9a-f).
pub fn is_valid_sha256_hex(s: &str) -> bool {
s.len() == 64 && s.bytes().all(|b| matches!(b, b'0'..=b'9' | b'a'..=b'f'))
}
/// Minimal unified diff check: at least one hunk, ideally with ---/+++ file headers.
pub fn looks_like_unified_diff(patch: &str) -> bool {
let mut has_hunk = false;
let mut has_minus_file = false;
let mut has_plus_file = false;
for line in patch.lines() {
if line.starts_with("@@") {
has_hunk = true;
}
if line.starts_with("--- ") {
has_minus_file = true;
}
if line.starts_with("+++ ") {
has_plus_file = true;
}
}
has_hunk && ((has_minus_file && has_plus_file) || patch.len() > 40)
}
/// Applies a unified diff to text. Returns Err("parse_failed") or Err("apply_failed").
pub fn apply_unified_diff_to_text(old_text: &str, patch_text: &str) -> Result<String, &'static str> {
use diffy::{apply, Patch};
let patch = Patch::from_str(patch_text).map_err(|_| "parse_failed")?;
apply(old_text, &patch).map_err(|_| "apply_failed")
}
/// PAPAYU_NORMALIZE_EOL=lf: \r\n → \n, ensure a trailing newline.
pub fn normalize_lf_with_trailing_newline(s: &str) -> String {
let mut out = s.replace("\r\n", "\n").replace('\r', "\n");
if !out.is_empty() && !out.ends_with('\n') {
out.push('\n');
}
out
}
#[cfg(test)]
mod tests {
use super::*;
use diffy::create_patch;
#[test]
fn test_sha256_hex() {
let s = "hello";
let h = sha256_hex(s.as_bytes());
assert_eq!(h.len(), 64);
assert!(h.chars().all(|c| c.is_ascii_hexdigit()));
}
#[test]
fn test_is_valid_sha256_hex() {
assert!(is_valid_sha256_hex("a".repeat(64).as_str()));
assert!(is_valid_sha256_hex(&"0".repeat(64)));
assert!(!is_valid_sha256_hex("abc"));
assert!(!is_valid_sha256_hex(&"g".repeat(64)));
}
#[test]
fn test_looks_like_unified_diff() {
let patch = r#"--- a/foo
+++ b/foo
@@ -1,3 +1,4 @@
line1
+line2
line3"#;
assert!(looks_like_unified_diff(patch));
assert!(!looks_like_unified_diff("not a diff"));
}
#[test]
fn test_apply_unified_diff() {
// Use create_patch so the patch is in a format diffy is guaranteed to parse
let old = "line1\nline3\n";
let new_expected = "line1\nline2\nline3\n";
let patch = create_patch(old, new_expected);
let patch_str = format!("{}", patch);
let applied = apply_unified_diff_to_text(old, &patch_str).unwrap();
assert_eq!(applied, new_expected);
}
}

src-tauri/src/protocol.rs (new file)

@@ -0,0 +1,88 @@
//! Protocol versioning: v1/v2 default, fallback, env vars.
use std::cell::RefCell;
/// Error codes for which v2 falls back to v1 (APPLY only).
pub const V2_FALLBACK_ERROR_CODES: &[&str] = &[
"ERR_PATCH_APPLY_FAILED",
"ERR_NON_UTF8_FILE",
"ERR_V2_UPDATE_EXISTING_FORBIDDEN",
];
/// Errors for which a v2 repair is attempted first, then fallback.
pub const V2_REPAIR_FIRST_ERROR_CODES: &[&str] = &[
"ERR_PATCH_APPLY_FAILED",
"ERR_V2_UPDATE_EXISTING_FORBIDDEN",
];
/// Error for which fallback happens immediately (repair would be pointless).
pub const V2_IMMEDIATE_FALLBACK_ERROR_CODES: &[&str] = &["ERR_NON_UTF8_FILE"];
thread_local! {
static EFFECTIVE_PROTOCOL: RefCell<Option<u32>> = RefCell::new(None);
}
/// Reads PAPAYU_PROTOCOL_DEFAULT, then PAPAYU_PROTOCOL_VERSION. Defaults to 2.
pub fn protocol_default() -> u32 {
std::env::var("PAPAYU_PROTOCOL_DEFAULT")
.or_else(|_| std::env::var("PAPAYU_PROTOCOL_VERSION"))
.ok()
.and_then(|s| s.trim().parse().ok())
.filter(|v| *v == 1 || *v == 2)
.unwrap_or(2)
}
/// Reads PAPAYU_PROTOCOL_FALLBACK_TO_V1. Default 1 (enabled).
pub fn protocol_fallback_enabled() -> bool {
std::env::var("PAPAYU_PROTOCOL_FALLBACK_TO_V1")
.ok()
.map(|s| matches!(s.trim().to_lowercase().as_str(), "1" | "true" | "yes"))
.unwrap_or(true)
}
/// Effective version: arg override → thread-local override → default.
pub fn protocol_version(override_version: Option<u32>) -> u32 {
if let Some(v) = override_version.filter(|v| *v == 1 || *v == 2) {
return v;
}
EFFECTIVE_PROTOCOL.with(|c| {
if let Some(v) = *c.borrow() {
return v;
}
protocol_default()
})
}
/// Sets the protocol version for the current thread. Cleared when the guard is dropped.
pub fn set_protocol_version(version: u32) -> ProtocolVersionGuard {
EFFECTIVE_PROTOCOL.with(|c| {
*c.borrow_mut() = Some(version);
});
ProtocolVersionGuard
}
pub struct ProtocolVersionGuard;
impl Drop for ProtocolVersionGuard {
fn drop(&mut self) {
EFFECTIVE_PROTOCOL.with(|c| {
*c.borrow_mut() = None;
});
}
}
/// Checks whether fallback to v1 is needed for the given error.
/// repair_attempt: 0 = first retry, 1 = repair already attempted.
/// ERR_NON_UTF8_FILE falls back immediately; PATCH_APPLY_FAILED and UPDATE_EXISTING_FORBIDDEN try repair first.
pub fn should_fallback_to_v1(error_code: &str, repair_attempt: u32) -> bool {
if !V2_FALLBACK_ERROR_CODES.contains(&error_code) {
return false;
}
if V2_IMMEDIATE_FALLBACK_ERROR_CODES.contains(&error_code) {
return true;
}
if V2_REPAIR_FIRST_ERROR_CODES.contains(&error_code) && repair_attempt >= 1 {
return true;
}
false
}
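The decision above can be exercised standalone. The sketch below reproduces the three constant lists (contents copied from the code above, names shortened) and walks each path:

```rust
// Standalone sketch of should_fallback_to_v1's decision table.
const V2_FALLBACK: &[&str] = &[
    "ERR_PATCH_APPLY_FAILED",
    "ERR_NON_UTF8_FILE",
    "ERR_V2_UPDATE_EXISTING_FORBIDDEN",
];
const IMMEDIATE: &[&str] = &["ERR_NON_UTF8_FILE"];
const REPAIR_FIRST: &[&str] = &["ERR_PATCH_APPLY_FAILED", "ERR_V2_UPDATE_EXISTING_FORBIDDEN"];

fn should_fallback(code: &str, repair_attempt: u32) -> bool {
    if !V2_FALLBACK.contains(&code) {
        return false; // unknown errors never trigger fallback
    }
    if IMMEDIATE.contains(&code) {
        return true; // repair cannot help a non-UTF-8 file
    }
    REPAIR_FIRST.contains(&code) && repair_attempt >= 1
}

fn main() {
    assert!(should_fallback("ERR_NON_UTF8_FILE", 0));       // immediate fallback
    assert!(!should_fallback("ERR_PATCH_APPLY_FAILED", 0)); // repair first
    assert!(should_fallback("ERR_PATCH_APPLY_FAILED", 1));  // repair failed -> fallback
    assert!(!should_fallback("ERR_SCHEMA_VALIDATION", 3));  // not in the fallback list
}
```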


@@ -74,6 +74,10 @@ pub fn preflight_actions(root: &Path, actions: &[Action]) -> Result<(), (String,
}
total_bytes += len;
}
ActionKind::PatchFile => {
files_touched += 1;
total_bytes += a.patch.as_deref().map(|s| s.len() as u64).unwrap_or(0);
}
ActionKind::CreateDir => {
dirs_created += 1;
}


@@ -210,11 +210,25 @@ pub fn normalize_content_for_write(content: &str, _path: &Path) -> String {
s
}
fn protocol_version(override_version: Option<u32>) -> u32 {
crate::protocol::protocol_version(override_version)
}
/// Apply a single action to disk (v2.3.3: for atomic apply + rollback on first failure).
pub fn apply_one_action(root: &Path, action: &Action, protocol_override: Option<u32>) -> Result<(), String> {
let full = safe_join(root, &action.path)?;
match action.kind {
ActionKind::CreateFile | ActionKind::UpdateFile => {
// v2: UPDATE_FILE is forbidden for existing files
if action.kind == ActionKind::UpdateFile
&& protocol_version(protocol_override) == 2
&& full.is_file()
{
return Err(format!(
"ERR_V2_UPDATE_EXISTING_FORBIDDEN: UPDATE_FILE path '{}' существует. В v2 используй PATCH_FILE.",
action.path
));
}
if let Some(p) = full.parent() {
fs::create_dir_all(p).map_err(|e| e.to_string())?;
}
@@ -222,6 +236,9 @@ pub fn apply_one_action(root: &Path, action: &Action) -> Result<(), String> {
let normalized = normalize_content_for_write(content, &full);
fs::write(&full, normalized).map_err(|e| e.to_string())?;
}
ActionKind::PatchFile => {
apply_patch_file_impl(root, &action.path, action)?;
}
ActionKind::CreateDir => {
fs::create_dir_all(&full).map_err(|e| e.to_string())?;
}
@@ -239,14 +256,59 @@ pub fn apply_one_action(root: &Path, action: &Action) -> Result<(), String> {
Ok(())
}
fn apply_patch_file_impl(root: &Path, path: &str, action: &Action) -> Result<(), String> {
use crate::patch::{
apply_unified_diff_to_text, is_valid_sha256_hex, looks_like_unified_diff,
normalize_lf_with_trailing_newline, sha256_hex,
};
let patch_text = action.patch.as_deref().unwrap_or("");
let base_sha256 = action.base_sha256.as_deref().unwrap_or("");
if !looks_like_unified_diff(patch_text) {
return Err("ERR_PATCH_NOT_UNIFIED: patch is not unified diff".into());
}
if !is_valid_sha256_hex(base_sha256) {
return Err("ERR_BASE_SHA256_INVALID: base_sha256 invalid (64 hex chars)".into());
}
let full = safe_join(root, path)?;
if !full.is_file() {
return Err(format!(
"ERR_BASE_MISMATCH: file not found for PATCH_FILE '{}'",
path
));
}
let old_bytes = fs::read(&full).map_err(|e| format!("ERR_IO: {}", e))?;
let old_sha = sha256_hex(&old_bytes);
if old_sha != base_sha256 {
return Err(format!(
"ERR_BASE_MISMATCH: base mismatch: have {}, want {}",
old_sha, base_sha256
));
}
let old_text = String::from_utf8(old_bytes)
.map_err(|_| String::from("ERR_NON_UTF8_FILE: PATCH_FILE requires utf-8"))?;
let mut new_text = apply_unified_diff_to_text(&old_text, patch_text)
.map_err(|_| String::from("ERR_PATCH_APPLY_FAILED: could not apply patch"))?;
let normalize_eol = std::env::var("PAPAYU_NORMALIZE_EOL")
.map(|s| s.trim().to_lowercase() == "lf")
.unwrap_or(false);
if normalize_eol {
new_text = normalize_lf_with_trailing_newline(&new_text);
}
if let Some(p) = full.parent() {
fs::create_dir_all(p).map_err(|e| e.to_string())?;
}
fs::write(&full, new_text).map_err(|e| e.to_string())
}
/// Apply order: CREATE_DIR → CREATE_FILE/UPDATE_FILE → PATCH_FILE → DELETE_FILE → DELETE_DIR.
pub fn sort_actions_for_apply(actions: &mut [Action]) {
fn order(k: &ActionKind) -> u8 {
match k {
ActionKind::CreateDir => 0,
ActionKind::CreateFile | ActionKind::UpdateFile => 1,
ActionKind::PatchFile => 2,
ActionKind::DeleteFile => 3,
ActionKind::DeleteDir => 4,
}
}
actions.sort_by_key(|a| (order(&a.kind), a.path.clone()));
@@ -258,7 +320,7 @@ pub fn apply_actions_to_disk(root: &Path, actions: &[Action]) -> Result<(), Stri
let mut sorted: Vec<Action> = actions.to_vec();
sort_actions_for_apply(&mut sorted);
for a in &sorted {
apply_one_action(root, a, None)?;
}
Ok(())
}
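The ordering fixed above can be demonstrated standalone: directories are created first, then full-file writes, then patches (which require the file to already exist), then deletions. The `Kind` enum below mirrors `ActionKind` for illustration only:

```rust
// Standalone sketch of the apply ordering from sort_actions_for_apply:
// CREATE_DIR -> CREATE/UPDATE_FILE -> PATCH_FILE -> DELETE_FILE -> DELETE_DIR.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Kind { CreateDir, CreateFile, UpdateFile, PatchFile, DeleteFile, DeleteDir }

fn order(k: Kind) -> u8 {
    match k {
        Kind::CreateDir => 0,
        Kind::CreateFile | Kind::UpdateFile => 1,
        Kind::PatchFile => 2,
        Kind::DeleteFile => 3,
        Kind::DeleteDir => 4,
    }
}

fn main() {
    let mut actions = vec![Kind::DeleteDir, Kind::PatchFile, Kind::CreateFile, Kind::CreateDir];
    actions.sort_by_key(|k| order(*k));
    // Patches land after the files they target exist, but before any deletions.
    assert_eq!(
        actions,
        vec![Kind::CreateDir, Kind::CreateFile, Kind::PatchFile, Kind::DeleteDir]
    );
}
```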


@@ -6,6 +6,12 @@ pub struct Action {
pub path: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub content: Option<String>,
/// v2 PATCH_FILE: unified diff
#[serde(skip_serializing_if = "Option::is_none")]
pub patch: Option<String>,
/// v2 PATCH_FILE: sha256 hex of the current file version
#[serde(skip_serializing_if = "Option::is_none")]
pub base_sha256: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
@@ -14,6 +20,7 @@ pub enum ActionKind {
CreateFile,
CreateDir,
UpdateFile,
PatchFile,
DeleteFile,
DeleteDir,
}
@@ -158,6 +165,11 @@ pub struct DiffItem {
/// v2.4.2: BLOCKED: protected or non-text file
#[serde(skip_serializing_if = "Option::is_none")]
pub summary: Option<String>,
/// v2: bytes before/after for PATCH_FILE (UX)
#[serde(skip_serializing_if = "Option::is_none")]
pub bytes_before: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
pub bytes_after: Option<usize>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -314,14 +326,29 @@ pub struct AgentPlan {
/// Assembled context passed to Apply together with plan_json.
#[serde(skip_serializing_if = "Option::is_none")]
pub plan_context: Option<String>,
/// Protocol version used during generation (for v1 fallback apply).
#[serde(skip_serializing_if = "Option::is_none")]
pub protocol_version_used: Option<u32>,
/// When ok=false and the online fallback is triggered, the UI calls researchAnswer(query).
#[serde(skip_serializing_if = "Option::is_none")]
pub online_fallback_suggested: Option<String>,
/// true when online_context_md was accepted and inserted into the prompt.
#[serde(skip_serializing_if = "Option::is_none")]
pub online_context_used: Option<bool>,
}
/// v3.1: apply options (auto_check). v2.4.2: user_confirmed for apply_actions_tx.
/// protocol_version_override: set for the v1 fallback after a v2 APPLY failure.
/// fallback_attempted: true when applying the v1 fallback; on error, do not retry the fallback.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ApplyOptions {
pub auto_check: bool,
#[serde(default)]
pub user_confirmed: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub protocol_version_override: Option<u32>,
#[serde(default)]
pub fallback_attempted: bool,
}
/// v3.1: result of a check stage (verify / build / smoke)
@@ -345,6 +372,8 @@ pub struct ApplyTxResult {
pub error: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error_code: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub protocol_fallback_stage: Option<String>,
}
/// v3.2: result of generating actions from a report (generate_actions_from_report)


@@ -32,6 +32,8 @@ export interface RunBatchPayload {
export interface ApplyActionsTxOptions {
auto_check: boolean;
user_confirmed: boolean;
protocol_version_override?: number | null;
fallback_attempted?: boolean;
}
export interface ProjectItem {
@@ -152,6 +154,9 @@ export interface AgentPlanResult {
error_code?: string;
plan_json?: string;
plan_context?: string;
protocol_version_used?: number | null;
online_fallback_suggested?: string | null;
online_context_used?: boolean | null;
}
export async function proposeActions(
@@ -161,7 +166,16 @@ export async function proposeActions(
designStyle?: string | null,
trendsContext?: string | null,
lastPlanJson?: string | null,
lastContext?: string | null,
applyErrorCode?: string | null,
applyErrorValidatedJson?: string | null,
applyRepairAttempt?: number | null,
applyErrorStage?: string | null,
onlineFallbackAttempted?: boolean | null,
onlineContextMd?: string | null,
onlineContextSources?: string[] | null,
onlineFallbackExecuted?: boolean | null,
onlineFallbackReason?: string | null
): Promise<AgentPlanResult> {
return invoke<AgentPlanResult>("propose_actions", {
path,
@@ -171,6 +185,15 @@ export async function proposeActions(
trendsContext: trendsContext ?? null,
lastPlanJson: lastPlanJson ?? null,
lastContext: lastContext ?? null,
applyErrorCode: applyErrorCode ?? null,
applyErrorValidatedJson: applyErrorValidatedJson ?? null,
applyRepairAttempt: applyRepairAttempt ?? null,
applyErrorStage: applyErrorStage ?? null,
onlineFallbackAttempted: onlineFallbackAttempted ?? null,
onlineContextMd: onlineContextMd ?? null,
onlineContextSources: onlineContextSources ?? null,
onlineFallbackExecuted: onlineFallbackExecuted ?? null,
onlineFallbackReason: onlineFallbackReason ?? null,
});
}
@@ -219,3 +242,26 @@ export async function exportSettings(): Promise<string> {
export async function importSettings(json: string, mode?: "replace" | "merge"): Promise<ImportResult> {
return invoke<ImportResult>("import_settings", { json, mode: mode ?? "merge" });
}
/** Weekly report: trace aggregation and LLM generation */
export async function analyzeWeeklyReports(
projectPath: string,
from?: string | null,
to?: string | null
): Promise<import("./types").WeeklyReportResult> {
return invoke("analyze_weekly_reports_cmd", {
projectPath,
from: from ?? null,
to: to ?? null,
});
}
/** Save the report to docs/reports/weekly_YYYY-MM-DD.md */
export async function saveReport(projectPath: string, reportMd: string, date?: string | null): Promise<string> {
return invoke("save_report_cmd", { projectPath, reportMd, date: date ?? null });
}
/** Online research: Tavily search + fetch + LLM summarize. Requires PAPAYU_ONLINE_RESEARCH=1 and PAPAYU_TAVILY_API_KEY. */
export async function researchAnswer(query: string): Promise<import("./types").OnlineAnswer> {
return invoke("research_answer_cmd", { query });
}


@@ -73,6 +73,10 @@ export interface AgentPlan {
plan_json?: string;
/** Assembled context for Apply */
plan_context?: string;
/** When ok=false and the online fallback is triggered, the UI calls researchAnswer(query) */
online_fallback_suggested?: string | null;
/** true when online_context_md was accepted and inserted into the prompt */
online_context_used?: boolean | null;
}
/** Trends and recommendations (monitored at least monthly) */
@@ -98,6 +102,7 @@ export interface ApplyTxResult {
checks: { stage: string; ok: boolean; output: string }[];
error?: string;
error_code?: string;
protocol_fallback_stage?: string | null;
}
/** v3.2: result of generate_actions_from_report */
@@ -205,3 +210,28 @@ export interface Session {
updated_at: string;
events: SessionEvent[];
}
/** Online research source */
export interface OnlineSource {
url: string;
title: string;
published_at?: string;
snippet?: string;
}
/** Online research result */
export interface OnlineAnswer {
answer_md: string;
sources: OnlineSource[];
confidence: number;
notes?: string;
}
/** Weekly report result */
export interface WeeklyReportResult {
ok: boolean;
error?: string;
stats_bundle?: unknown;
llm_report?: unknown;
report_md?: string;
}


@@ -20,6 +20,9 @@ import {
fetchTrendsRecommendations,
exportSettings,
importSettings,
analyzeWeeklyReports,
saveReport,
researchAnswer,
} from "@/lib/tauri";
import { AgenticResult } from "@/pages/tasks/AgenticResult";
import { useUndoRedo } from "@/pages/tasks/useUndoRedo";
@@ -30,6 +33,7 @@ import type {
AnalyzeReport,
ChatMessage,
DiffItem,
OnlineSource,
ProjectProfile,
ApplyTxResult,
AgenticRunRequest,
@@ -92,10 +96,23 @@ export default function Tasks() {
const applyingRef = useRef(false);
const [requestHistory, setRequestHistory] = useState<{ id: string; title: string; messages: ChatMessage[]; lastPath: string | null; lastReport: AnalyzeReport | null }[]>([]);
const [trendsModalOpen, setTrendsModalOpen] = useState(false);
const [weeklyReportModalOpen, setWeeklyReportModalOpen] = useState(false);
const [weeklyReport, setWeeklyReport] = useState<{ reportMd: string; projectPath: string } | null>(null);
const [weeklyReportLoading, setWeeklyReportLoading] = useState(false);
const [selectedRecommendation, setSelectedRecommendation] = useState<TrendsRecommendation | null>(null); const [selectedRecommendation, setSelectedRecommendation] = useState<TrendsRecommendation | null>(null);
const [attachmentMenuOpen, setAttachmentMenuOpen] = useState(false); const [attachmentMenuOpen, setAttachmentMenuOpen] = useState(false);
const [lastPlanJson, setLastPlanJson] = useState<string | null>(null); const [lastPlanJson, setLastPlanJson] = useState<string | null>(null);
const [lastPlanContext, setLastPlanContext] = useState<string | null>(null); const [lastPlanContext, setLastPlanContext] = useState<string | null>(null);
const lastGoalWithOnlineFallbackRef = useRef<string | null>(null);
const [lastOnlineAnswer, setLastOnlineAnswer] = useState<{ answer_md: string; sources: OnlineSource[]; confidence: number } | null>(null);
const [onlineContextPending, setOnlineContextPending] = useState<{ md: string; sources: string[] } | null>(null);
const [onlineAutoUseAsContext, setOnlineAutoUseAsContext] = useState<boolean>(() => {
try {
const stored = localStorage.getItem("papa_yu_online_auto_use_as_context");
if (stored !== null) return stored === "true";
} catch (_) {}
return false;
});
const { undoAvailable, redoAvailable, refreshUndoRedo, handleUndo, handleRedo, setUndoAvailable } = useUndoRedo(lastPath, {
setMessages,
@@ -116,6 +133,12 @@ export default function Tasks() {
})();
}, []);
useEffect(() => {
try {
localStorage.setItem("papa_yu_online_auto_use_as_context", String(onlineAutoUseAsContext));
} catch (_) {}
}, [onlineAutoUseAsContext]);
useEffect(() => {
if (!lastPath) {
setSessions([]);
@@ -534,7 +557,75 @@
await refreshUndoRedo();
} else {
const code = res.error_code || "";
const isBaseShaError = code === "ERR_BASE_MISMATCH" || code === "ERR_BASE_SHA256_INVALID";
const isV2FallbackError = ["ERR_PATCH_APPLY_FAILED", "ERR_NON_UTF8_FILE", "ERR_V2_UPDATE_EXISTING_FORBIDDEN"].includes(code);
const repairFirstErrors = ["ERR_PATCH_APPLY_FAILED", "ERR_V2_UPDATE_EXISTING_FORBIDDEN"];
const canRetry = (isBaseShaError || isV2FallbackError) && lastPlanJson && lastPlanContext;
if (canRetry) {
let repairAttempt = 0;
let lastPlanJsonRetry = lastPlanJson;
let lastPlanContextRetry = lastPlanContext;
let lastErrorCode = code;
let retryRes: ApplyTxResult | null = null;
const maxRetries = repairFirstErrors.includes(code) ? 2 : 1;
for (let attempt = 0; attempt < maxRetries; attempt++) {
const isFallback = repairFirstErrors.includes(lastErrorCode) && repairAttempt >= 1;
setApplyProgressLog((prev) => [
...prev,
isFallback ? "Retry v1 fallback…" : isBaseShaError ? "Retry с repair (base_sha256)…" : "Retry repair…",
]);
try {
const plan = await proposeActions(
path,
lastReportJson ?? "{}",
"ok",
designStyle.trim() || undefined,
undefined,
lastPlanJsonRetry,
lastPlanContextRetry,
lastErrorCode,
lastPlanJsonRetry,
repairAttempt,
"apply",
undefined,
undefined,
undefined,
undefined
);
if (!plan.ok || plan.actions.length === 0) break;
retryRes = await apiApplyActionsTx(path, plan.actions, {
auto_check: autoCheck,
user_confirmed: true,
protocol_version_override: plan.protocol_version_used ?? undefined,
fallback_attempted: plan.protocol_version_used === 1,
});
setApplyResult(retryRes);
setApplyProgressLog((prev) => [...prev, retryRes!.ok ? "Готово." : (retryRes!.error || "Ошибка")]);
if (retryRes.ok) {
setMessages((m) => [
...m,
{ role: "system", text: plan.protocol_version_used === 1 ? "Изменения применены (v1 fallback)." : "Изменения применены (repair). Проверки пройдены." },
]);
setPendingPreview(null);
setPendingActions(null);
setPendingActionIdx({});
await refreshUndoRedo();
break;
}
lastErrorCode = retryRes.error_code || lastErrorCode;
repairAttempt = 1;
if (plan.protocol_version_used === 1) break;
} catch (e) {
setApplyProgressLog((prev) => [...prev, `Retry failed: ${String(e)}`]);
break;
}
}
if (retryRes && !retryRes.ok) {
setMessages((m) => [...m, { role: "system", text: retryRes.error || retryRes.error_code || "Ошибка применения." }]);
} else if (!retryRes) {
setMessages((m) => [...m, { role: "system", text: res.error || res.error_code || "Ошибка применения." }]);
}
} else if (code === "CONFIRM_REQUIRED") {
setMessages((m) => [...m, { role: "system", text: "Подтверждение обязательно перед применением." }]);
} else if (code === "AUTO_CHECK_FAILED_ROLLED_BACK") {
setMessages((m) => [...m, { role: "system", text: "Изменения привели к ошибкам, откат выполнен." }]);
@@ -772,6 +863,11 @@
: undefined;
} catch (_) {}
}
const pending = onlineContextPending;
if (pending) {
setOnlineContextPending(null);
setLastOnlineAnswer(null);
}
const plan = await proposeActions(
pathToUse,
reportToUse,
@@ -779,10 +875,94 @@
designStyle.trim() || undefined,
trendsContext,
lastPlanJson ?? undefined,
lastPlanContext ?? undefined,
undefined,
undefined,
undefined,
undefined,
lastGoalWithOnlineFallbackRef.current === goal,
pending?.md ?? undefined,
pending?.sources ?? undefined,
!!pending
);
if (!plan.ok) {
if (plan.online_fallback_suggested) {
const isAutoUse = onlineAutoUseAsContext;
const alreadyAttempted = lastGoalWithOnlineFallbackRef.current === goal;
if (isAutoUse && !alreadyAttempted) {
lastGoalWithOnlineFallbackRef.current = goal;
setMessages((m) => [...m, { role: "assistant", text: plan.error ?? "Ошибка формирования плана" }]);
setMessages((m) => [...m, { role: "system", text: "Онлайн-поиск (auto)…" }]);
try {
const online = await researchAnswer(plan.online_fallback_suggested);
setLastOnlineAnswer({ answer_md: online.answer_md, sources: online.sources ?? [], confidence: online.confidence });
const sourcesLine = online.sources?.length
? "\n\nИсточники:\n" + online.sources.slice(0, 5).map((s) => `${s.title}: ${s.url}`).join("\n")
: "";
setMessages((m) => [...m, { role: "assistant", text: `**Online Research** (confidence: ${(online.confidence * 100).toFixed(0)}%)\n\n${online.answer_md}${sourcesLine}` }]);
setMessages((m) => [...m, { role: "system", text: "Повтор запроса с online context…" }]);
const onlineMd = online.answer_md.slice(0, 8000);
const onlineSources = online.sources.slice(0, 10).map((s) => s.url);
const plan2 = await proposeActions(
pathToUse,
reportToUse,
goal,
designStyle.trim() || undefined,
trendsContext,
lastPlanJson ?? undefined,
lastPlanContext ?? undefined,
undefined,
undefined,
undefined,
undefined,
true,
onlineMd,
onlineSources,
true,
plan.error_code ?? undefined
);
if (!plan2.ok) {
setMessages((m) => [...m, { role: "assistant", text: plan2.error ?? "Ошибка формирования плана после online context" }]);
return;
}
setLastPlanJson(plan2.plan_json ?? null);
setLastPlanContext(plan2.plan_context ?? null);
const summary = plan2.summary || "План от ИИ";
if (plan2.protocol_version_used) {
setMessages((m) => [...m, { role: "assistant", text: `${summary} (protocol v${plan2.protocol_version_used}, online context used)` }]);
} else {
setMessages((m) => [...m, { role: "assistant", text: `${summary} (online context used)` }]);
}
setPendingActions(plan2.actions);
const allIdx: Record<number, boolean> = {};
plan2.actions.forEach((_, i) => { allIdx[i] = true; });
setPendingActionIdx(allIdx);
if (plan2.actions.length) {
setMessages((m) => [...m, { role: "system", text: "Предпросмотр изменений…" }]);
await handlePreview(pathToUse, plan2.actions);
}
} catch (e) {
setMessages((m) => [...m, { role: "assistant", text: `Онлайн-поиск недоступен: ${String(e)}` }]);
}
return;
} else {
lastGoalWithOnlineFallbackRef.current = goal;
setMessages((m) => [...m, { role: "assistant", text: plan.error ?? "Ошибка формирования плана" }]);
setMessages((m) => [...m, { role: "system", text: "Попытка онлайн-поиска…" }]);
try {
const online = await researchAnswer(plan.online_fallback_suggested);
setLastOnlineAnswer({ answer_md: online.answer_md, sources: online.sources ?? [], confidence: online.confidence });
const sourcesLine = online.sources?.length
? "\n\nИсточники:\n" + online.sources.slice(0, 5).map((s) => `${s.title}: ${s.url}`).join("\n")
: "";
setMessages((m) => [...m, { role: "assistant", text: `**Online Research** (confidence: ${(online.confidence * 100).toFixed(0)}%)\n\n${online.answer_md}${sourcesLine}` }]);
} catch (e) {
setMessages((m) => [...m, { role: "assistant", text: `Онлайн-поиск недоступен: ${String(e)}` }]);
}
}
} else {
setMessages((m) => [...m, { role: "assistant", text: plan.error ?? "Ошибка формирования плана" }]);
}
return;
}
// Save the plan and context for Apply (used when the user writes "ok" or "применяй")
@@ -975,6 +1155,49 @@
<img src="/send-icon.png" alt="" style={{ height: "20px", width: "auto", objectFit: "contain" }} />
Тренды и рекомендации
</button>
<button
type="button"
onClick={async () => {
const path = lastPath || folderLinks[0];
if (!path) {
setMessages((m) => [...m, { role: "system", text: "Выберите проект для Weekly Report." }]);
return;
}
setWeeklyReportModalOpen(true);
setWeeklyReportLoading(true);
setWeeklyReport(null);
try {
const res = await analyzeWeeklyReports(path);
if (res.ok && res.report_md) {
setWeeklyReport({ reportMd: res.report_md, projectPath: path });
} else {
setWeeklyReport({ reportMd: res.error || "Ошибка генерации отчёта.", projectPath: path });
}
} catch (e) {
setWeeklyReport({ reportMd: String(e), projectPath: path });
} finally {
setWeeklyReportLoading(false);
}
}}
style={{
padding: "10px 14px",
background: "#059669",
color: "#fff",
border: "none",
borderRadius: "var(--radius-md)",
cursor: "pointer",
fontWeight: 600,
fontSize: "13px",
boxShadow: "0 2px 6px rgba(5, 150, 105, 0.3)",
display: "flex",
alignItems: "center",
justifyContent: "center",
gap: "8px",
}}
title="Еженедельный отчёт по телеметрии"
>
Weekly Report
</button>
{displayRequests.length > 0 && (
<div style={{ fontSize: "12px", fontWeight: 600, color: "var(--color-text-muted)", marginBottom: "4px", marginTop: "8px" }}>
Запросы
@@ -1420,6 +1643,62 @@
<p style={{ margin: "10px 0 0 0" }}>3. После изменений нажмите «Проверить целостность» для автоматической проверки типов, сборки и тестов.</p>
</div>
)}
{lastOnlineAnswer && (
<div style={{ marginBottom: "16px", padding: "14px", background: "#f0fdf4", borderRadius: "var(--radius-md)", border: "1px solid #86efac" }}>
<div style={{ display: "flex", alignItems: "center", justifyContent: "space-between", marginBottom: "8px" }}>
<div style={{ fontWeight: 600, color: "#166534" }}>Online Research</div>
{onlineAutoUseAsContext && (
<span style={{ fontSize: "12px", color: "#16a34a", fontWeight: 500, background: "#dcfce7", padding: "2px 8px", borderRadius: "4px" }}>Auto-used </span>
)}
</div>
<div style={{ fontSize: "14px", whiteSpace: "pre-wrap", wordBreak: "break-word", marginBottom: "10px" }}>{lastOnlineAnswer.answer_md}</div>
{lastOnlineAnswer.sources?.length ? (
<div style={{ marginBottom: "10px", fontSize: "13px" }}>
<span style={{ fontWeight: 500, color: "#64748b" }}>Источники:</span>
<ul style={{ margin: "4px 0 0 0", paddingLeft: "20px" }}>
{lastOnlineAnswer.sources.slice(0, 8).map((s, j) => (
<li key={j}>
<a href={s.url} target="_blank" rel="noopener noreferrer" style={{ color: "#2563eb" }}>{s.title || s.url}</a>
</li>
))}
</ul>
</div>
) : null}
<div style={{ display: "flex", gap: "8px", flexWrap: "wrap" }}>
{!onlineAutoUseAsContext && (
<button
type="button"
onClick={() => {
setOnlineContextPending({ md: lastOnlineAnswer!.answer_md, sources: lastOnlineAnswer!.sources?.map((s) => s.url).filter(Boolean) ?? [] });
setMessages((m) => [...m, { role: "system", text: "Online Research будет использован в следующем запросе." }]);
}}
style={{ padding: "6px 12px", fontSize: "13px", background: "#166534", color: "#fff", border: "none", borderRadius: "6px", cursor: "pointer", fontWeight: 500 }}
>
Use as context (once)
</button>
)}
<button
type="button"
onClick={() => { navigator.clipboard.writeText(lastOnlineAnswer!.answer_md); }}
style={{ padding: "6px 12px", fontSize: "13px", background: "#e2e8f0", border: "none", borderRadius: "6px", cursor: "pointer", fontWeight: 500 }}
>
Copy answer
</button>
{onlineAutoUseAsContext && (
<button
type="button"
onClick={() => {
setOnlineAutoUseAsContext(false);
setMessages((m) => [...m, { role: "system", text: "Auto-use отключён для текущего проекта." }]);
}}
style={{ padding: "6px 12px", fontSize: "13px", background: "#f87171", color: "#fff", border: "none", borderRadius: "6px", cursor: "pointer", fontWeight: 500 }}
>
Disable auto-use
</button>
)}
</div>
</div>
)}
{messages.length > 0 && messages.map((msg, i) => (
<div key={i} style={{ marginBottom: "16px", padding: "12px 14px", background: msg.role === "assistant" ? "#f8fafc" : msg.role === "system" ? "#f1f5f9" : "transparent", borderRadius: "var(--radius-md)", border: msg.role === "assistant" ? "1px solid #e2e8f0" : "none" }}>
<div style={{ display: "flex", alignItems: "flex-start", justifyContent: "space-between", gap: "10px", flexWrap: "wrap" }}>
@@ -2063,6 +2342,65 @@
</div>
)}
{weeklyReportModalOpen && (
<div
style={{
position: "fixed",
inset: 0,
background: "rgba(0,0,0,0.4)",
display: "flex",
alignItems: "center",
justifyContent: "center",
zIndex: 9998,
}}
onClick={(e) => e.target === e.currentTarget && setWeeklyReportModalOpen(false)}
>
<div
style={{
background: "#fff",
borderRadius: "var(--radius-xl)",
boxShadow: "0 20px 60px rgba(0,0,0,0.2)",
maxWidth: 680,
width: "90%",
maxHeight: "85vh",
display: "flex",
flexDirection: "column",
overflow: "hidden",
}}
onClick={(e) => e.stopPropagation()}
>
<div style={{ padding: "16px 20px", borderBottom: "1px solid var(--color-border)", fontWeight: 700, fontSize: "16px", color: "#059669", display: "flex", justifyContent: "space-between", alignItems: "center", flexWrap: "wrap", gap: "8px" }}>
Weekly Report
<div style={{ display: "flex", gap: "8px", alignItems: "center" }}>
{weeklyReport && !weeklyReportLoading && !weeklyReport.reportMd.startsWith("Ошибка") && (
<button
type="button"
onClick={async () => {
if (!weeklyReport) return;
try {
const path = await saveReport(weeklyReport.projectPath, weeklyReport.reportMd);
setMessages((m) => [...m, { role: "system", text: `Отчёт сохранён: ${path}` }]);
} catch (e) {
setMessages((m) => [...m, { role: "system", text: `Ошибка сохранения: ${String(e)}` }]);
}
}}
style={{ padding: "6px 12px", background: "#059669", color: "#fff", border: "none", borderRadius: "8px", fontWeight: 600, cursor: "pointer" }}
>
Сохранить отчёт
</button>
)}
<button type="button" onClick={() => setWeeklyReportModalOpen(false)} style={{ padding: "6px 12px", background: "#e2e8f0", border: "none", borderRadius: "8px", fontWeight: 600 }}>Закрыть</button>
</div>
</div>
<div style={{ padding: "16px 20px", overflowY: "auto", flex: 1, whiteSpace: "pre-wrap", fontFamily: "var(--font-mono, monospace)", fontSize: "13px", lineHeight: 1.6 }}>
{weeklyReportLoading && <p style={{ color: "var(--color-text-muted)" }}>Собираю трассы и генерирую отчёт</p>}
{weeklyReport && !weeklyReportLoading && <pre style={{ margin: 0, whiteSpace: "pre-wrap", wordBreak: "break-word" }}>{weeklyReport.reportMd}</pre>}
{!weeklyReport && !weeklyReportLoading && <p style={{ color: "var(--color-text-muted)" }}>Нет данных.</p>}
</div>
</div>
</div>
)}
</main>
</div>
);
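The cycle protection in the auto-chain flow above reduces to a once-per-goal guard around `lastGoalWithOnlineFallbackRef`: the ref is marked in both the auto and manual branches, so a given goal triggers online fallback at most once. A standalone sketch of that guard (`GuardRef` stands in for React's `useRef` cell; `tryOnlineFallback` is an illustrative name, not a function in the codebase):

```typescript
// Stand-in for the useRef cell holding the last goal that triggered fallback.
type GuardRef = { current: string | null };

// Returns true if an automatic online fallback may run for this goal,
// and marks the goal as attempted either way (matching both branches
// of the component, which set the ref in auto and manual mode alike).
function tryOnlineFallback(ref: GuardRef, goal: string, autoUse: boolean): boolean {
  const alreadyAttempted = ref.current === goal;
  ref.current = goal;
  return autoUse && !alreadyAttempted;
}
```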