This commit implements fully automatic injection of online research results into the LLM prompt, with no user click required.
## Backend
### Environment Variables
- Added `PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1` (default: 0) to enable automatic injection of online research results into subsequent `proposeActions` calls.
- Added `is_online_auto_use_as_context()` helper function in `online_research/mod.rs`.
### Command Changes
- **`propose_actions` command**: Added `online_fallback_reason: Option<String>` parameter to track the error code that triggered online fallback.
- **`llm_planner::plan` function**: Added `online_fallback_reason: Option<&str>` parameter for tracing.
- **Trace Enhancements**: Added `online_fallback_reason` field to trace when `online_fallback_executed` is true.
### Module Exports
- Made `extract_error_code_prefix` public in `online_research/fallback.rs` for frontend use.
## Frontend
### Project Settings
- Added `onlineAutoUseAsContext` state (persisted in `localStorage` as `papa_yu_online_auto_use_as_context`).
- Initialized from localStorage or defaults to `false`.
- Auto-saved to localStorage on change.
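The persistence described above can be sketched as plain load/save helpers. This is a minimal sketch under assumptions: the `KVStore` interface and function names are illustrative stand-ins for the real React state wiring; only the storage key comes from the commit.

```typescript
// Hypothetical sketch of the setting's persistence. KVStore abstracts
// window.localStorage so the logic is testable outside a browser.
const KEY = "papa_yu_online_auto_use_as_context";

interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Initialize from storage, defaulting to false when the key is absent.
function loadOnlineAutoUseAsContext(store: KVStore): boolean {
  return store.getItem(KEY) === "1";
}

// Auto-save on change: persist the new value under the same key.
function saveOnlineAutoUseAsContext(store: KVStore, enabled: boolean): void {
  store.setItem(KEY, enabled ? "1" : "0");
}
```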
### Auto-Chain Flow
- When `plan.ok === false` and `plan.online_fallback_suggested` is present:
- If `onlineAutoUseAsContext === true` and not already attempted for this goal (cycle protection via `lastGoalWithOnlineFallbackRef`):
- Automatically calls `researchAnswer(query)`.
- Truncates result to `8000` chars and `10` sources (frontend-side limits).
- Immediately calls `proposeActions` again with:
- `online_context_md`
- `online_context_sources`
- `online_fallback_executed: true`
- `online_fallback_reason: error_code`
- `online_fallback_attempted: true`
- Displays the new plan/error without requiring "Use as context" button click.
- If `onlineAutoUseAsContext === false` or already attempted:
- Falls back to manual mode (shows online research block with "Use as context (once)" button).
### Cycle Protection
- `lastGoalWithOnlineFallbackRef` tracks the last goal that triggered online fallback.
- If the same goal triggers fallback again, auto-chain is skipped to prevent infinite loops.
- Maximum 1 auto-chain per user query.
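Taken together, the auto-chain and cycle protection amount to a guard-then-retry step. The sketch below is illustrative, not the actual implementation: `maybeAutoChain`, the `Deps` bundle, and the `researchAnswer` return shape are assumptions; the flags, ref name, and the 8000-char / 10-source limits come from the commit.

```typescript
// Hypothetical sketch of the auto-chain flow with cycle protection.
interface PlanResult {
  ok: boolean;
  error_code?: string;
  online_fallback_suggested?: string; // query to research
}

interface Deps {
  onlineAutoUseAsContext: boolean;
  lastGoalWithOnlineFallbackRef: { current: string | null };
  researchAnswer(query: string): Promise<{ md: string; sources: string[] }>;
  proposeActions(args: Record<string, unknown>): Promise<PlanResult>;
}

const MAX_CONTEXT_CHARS = 8000; // frontend-side truncation limits
const MAX_SOURCES = 10;

// Returns the retried plan, or null when the manual-mode path applies.
async function maybeAutoChain(
  goal: string,
  plan: PlanResult,
  deps: Deps,
): Promise<PlanResult | null> {
  if (plan.ok || !plan.online_fallback_suggested) return null;
  // Cycle protection: at most one auto-chain per goal.
  const alreadyTried = deps.lastGoalWithOnlineFallbackRef.current === goal;
  if (!deps.onlineAutoUseAsContext || alreadyTried) return null; // manual mode
  deps.lastGoalWithOnlineFallbackRef.current = goal;

  const research = await deps.researchAnswer(plan.online_fallback_suggested);
  return deps.proposeActions({
    goal,
    online_context_md: research.md.slice(0, MAX_CONTEXT_CHARS),
    online_context_sources: research.sources.slice(0, MAX_SOURCES),
    online_fallback_executed: true,
    online_fallback_reason: plan.error_code ?? null,
    online_fallback_attempted: true,
  });
}
```

A second call with the same goal short-circuits to `null`, which is what routes the UI back to the manual "Use as context (once)" path.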
### UI Enhancements
- **Online Research Block**:
- When `onlineAutoUseAsContext === true`: displays "Auto-used ✓" badge.
- Hides "Use as context (once)" button when auto-use is enabled.
- Adds "Disable auto-use" button (red) to disable auto-use for the current project.
- When disabled, shows the system message: "Auto-use disabled for the current project."
### API Updates
- **`proposeActions` in `tauri.ts`**: Added `onlineFallbackReason?: string | null` parameter.
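One way to thread the new optional parameter into the command arguments is sketched below. `buildProposeActionsArgs` is a hypothetical helper, not the real wrapper; only the `onlineFallbackReason?: string | null` parameter and the backend's `online_fallback_reason` counterpart are from the commit.

```typescript
// Hypothetical sketch of mapping wrapper options to invoke() arguments.
export interface ProposeActionsOpts {
  goal: string;
  onlineFallbackReason?: string | null;
  // ...other existing options elided
}

// Build the argument object passed to invoke("propose_actions", ...).
// Assumes Tauri's default camelCase argument keys, which map onto the
// Rust parameter `online_fallback_reason: Option<String>`.
export function buildProposeActionsArgs(
  opts: ProposeActionsOpts,
): Record<string, unknown> {
  return {
    goal: opts.goal,
    onlineFallbackReason: opts.onlineFallbackReason ?? null,
  };
}
```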
## Tests
- **`online_context_auto_test.rs`**: Added unit tests for:
- `test_is_online_auto_use_disabled_by_default`
- `test_is_online_auto_use_enabled_when_set`
- `test_extract_error_code_prefix_timeout`
- `test_extract_error_code_prefix_schema`
- `test_extract_error_code_prefix_empty_when_no_prefix`
All tests pass.
## Documentation
### README.md
- Added "Auto-use (X4)" subsection under "Online Research":
- Describes `PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1` env var (default: 0).
- Explains cycle protection: maximum 1 auto-chain per goal.
- Documents UI behavior: "Auto-used ✓" badge and "Disable auto-use" button.
## Behavior Summary
**Without auto-use (default):**
1. `proposeActions` → error + `online_fallback_suggested`
2. UI calls `researchAnswer`
3. UI displays online research block with "Use as context (once)" button
4. User clicks button → sets `onlineContextPending` → next `proposeActions` includes context
**With auto-use enabled (`PAPAYU_ONLINE_AUTO_USE_AS_CONTEXT=1`):**
1. `proposeActions` → error + `online_fallback_suggested`
2. UI calls `researchAnswer` automatically
3. UI displays online research block with "Auto-used ✓" badge
4. UI immediately calls `proposeActions` again with online context → displays new plan
5. If still fails → no retry (cycle protection)
## Build Status
- ✅ Backend: `cargo build --lib` (2 warnings about unused code for future features)
- ✅ Frontend: `npm run build`
- ✅ Tests: `cargo test online_context_auto_test --lib` (5 passed)
Co-authored-by: Cursor <cursoragent@cursor.com>
## Golden traces — reference artifacts
They capture deterministic papa-yu results without depending on an LLM, catching regressions in validation, parsing, diet, and cache.
### Structure
```
docs/golden_traces/
  README.md
  v1/    # Protocol v1 fixtures
    001_fix_bug_plan.json
    002_fix_bug_apply.json
    ...
  v2/    # Protocol v2 fixtures (PATCH_FILE, base_sha256)
    001_fix_bug_plan.json
    002_fix_bug_apply_patch.json
    003_base_mismatch_block.json
    004_patch_apply_failed_block.json
    005_no_changes_apply.json
```
### Fixture format (no secrets)
A minimal, stable JSON with four sections:
- `protocol` — schema_version, schema_hash
- `request` — mode, input_chars, token_budget, strict_json, provider, model
- `context` — context_digest (optional), context_stats, cache_stats
- `result` — validated_json (object), validation_outcome, error_code

No raw_content, no secrets.
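Read as a type, the fixture shape described above looks roughly like this. The field grouping follows the list; exact value types and any optionality beyond `context_digest` are assumptions.

```typescript
// Hypothetical typing of a golden-trace fixture, mirroring the four
// sections listed above. Value types are assumptions for illustration.
interface GoldenTraceFixture {
  protocol: { schema_version: string; schema_hash: string };
  request: {
    mode: string;
    input_chars: number;
    token_budget: number;
    strict_json: boolean;
    provider: string;
    model: string;
  };
  context: {
    context_digest?: string; // optional
    context_stats: Record<string, number>;
    cache_stats: Record<string, number>;
  };
  result: {
    validated_json: Record<string, unknown>; // an object, never raw_content
    validation_outcome: string;
    error_code: string | null;
  };
}
```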
### Generating from traces
```sh
cd src-tauri
cargo run --bin trace_to_golden -- <trace_id> [output_path]
cargo run --bin trace_to_golden -- <path/to/trace.json> [output_path]
```
Reads the trace from `.papa-yu/traces/<trace_id>.json` or from the given file. Writes to `docs/golden_traces/v1/`.
### Regression test
```sh
cargo test golden_traces_v1_validate golden_traces_v2_validate
# or
make test-protocol
npm run test-protocol
```
### Golden trace update policy
- When to update: only on an intentional change to the protocol or validator (path/content/conflicts, schema, diet).
- How to update: via trace_to_golden — `make golden` (from the latest trace) or `make golden TRACE_ID=<id>`.
- How to add a new scenario: run propose with `PAPAYU_TRACE=1`, then `make golden`, and save the output as `v1/NNN_<name>.json` with sequence number NNN.
- On a schema_hash change: either bump schema_version (a new v2 document), or regenerate all fixtures (trace_to_golden on fresh traces) and record the change in the CHANGELOG.