major update

This commit is contained in:
Dennis Thiessen
2026-02-27 16:08:09 +01:00
parent 61ab24490d
commit 181cfe6588
71 changed files with 7647 additions and 281 deletions

.kiro/settings/mcp.json

@@ -0,0 +1,116 @@
{
"mcpServers": {
"context7": {
"gallery": true,
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp@latest"
],
"env": {
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080"
},
"type": "stdio"
},
"aws.mcp": {
"command": "uvx",
"timeout": 100000,
"transport": "stdio",
"args": [
"mcp-proxy-for-aws@latest",
"https://aws-mcp.us-east-1.api.aws/mcp"
],
"env": {
"AWS_PROFILE": "409330224121_sc-ps-standard-admin",
"AWS_REGION": "eu-central-2",
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080",
"SSL_CERT_FILE": "/Users/taathde3/combined-ca-bundle.pem",
"REQUESTS_CA_BUNDLE": "/Users/taathde3/combined-ca-bundle.pem"
},
"disabled": false,
"autoApprove": []
},
"aws.eks.mcp": {
"command": "uvx",
"timeout": 100000,
"transport": "stdio",
"args": [
"mcp-proxy-for-aws@latest",
"https://eks-mcp.eu-central-1.api.aws/mcp",
"--service",
"eks-mcp"
],
"env": {
"AWS_PROFILE": "409330224121_sc-ps-standard-admin",
"AWS_REGION": "eu-central-2",
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080",
"SSL_CERT_FILE": "/Users/taathde3/combined-ca-bundle.pem",
"REQUESTS_CA_BUNDLE": "/Users/taathde3/combined-ca-bundle.pem"
},
"disabled": false,
"autoApprove": []
},
"aws.ecs.mcp": {
"command": "uvx",
"timeout": 100000,
"transport": "stdio",
"args": [
"mcp-proxy-for-aws@latest",
"https://ecs-mcp.us-east-1.api.aws/mcp",
"--service",
"ecs-mcp"
],
"env": {
"AWS_PROFILE": "409330224121_sc-ps-standard-admin",
"AWS_REGION": "eu-central-2",
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080",
"SSL_CERT_FILE": "/Users/taathde3/combined-ca-bundle.pem",
"REQUESTS_CA_BUNDLE": "/Users/taathde3/combined-ca-bundle.pem"
},
"disabled": false,
"autoApprove": []
},
"iaws.support.agent": {
"command": "uvx",
"args": [
"mcp-proxy-for-aws@latest",
"https://bedrock-agentcore.eu-central-1.amazonaws.com/runtimes/arn%3Aaws%3Abedrock-agentcore%3Aeu-central-1%3A228864602806%3Aruntime%2Fiaws_support_agent-NvMQxHFf9P/invocations?qualifier=DEFAULT",
"--metadata",
"AWS_REGION=eu-central-1"
],
"env": {
"AWS_PROFILE": "409330224121_sc-ps-standard-admin",
"AWS_REGION": "eu-central-2",
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080",
"SSL_CERT_FILE": "/Users/taathde3/combined-ca-bundle.pem",
"REQUESTS_CA_BUNDLE": "/Users/taathde3/combined-ca-bundle.pem"
},
"disabled": false
},
"iaws.platform.agent": {
"command": "uvx",
"args": [
"mcp-proxy-for-aws@latest",
"https://bedrock-agentcore.eu-central-1.amazonaws.com/runtimes/arn%3Aaws%3Abedrock-agentcore%3Aeu-central-1%3A228864602806%3Aruntime%2Fiaws_platform_agent-jxCudsFEFj/invocations?qualifier=DEFAULT",
"--metadata",
"AWS_REGION=eu-central-1"
],
"env": {
"AWS_PROFILE": "409330224121_sc-ps-standard-admin",
"AWS_REGION": "eu-central-2",
"HTTP_PROXY": "http://aproxy.corproot.net:8080",
"HTTPS_PROXY": "http://aproxy.corproot.net:8080",
"SSL_CERT_FILE": "/Users/taathde3/combined-ca-bundle.pem",
"REQUESTS_CA_BUNDLE": "/Users/taathde3/combined-ca-bundle.pem"
},
"disabled": false
}
},
"inputs": []
}


@@ -0,0 +1 @@
{"specId": "9b39d94f-51e1-42d3-bacc-68eb3961f2b1", "workflowType": "requirements-first", "specType": "feature"}


@@ -0,0 +1,502 @@
# Design Document — Dashboard Enhancements
## Overview
This design covers four enhancements to the TickerDetailPage dashboard:
1. **Sentiment drill-down** — Store OpenAI reasoning text and web search citations in the DB; expose via API; render in an expandable detail section within SentimentPanel.
2. **Fundamentals drill-down** — Track which FMP endpoints returned 402 (paid-plan-required) and surface those reasons in the API and an expandable detail section within FundamentalsPanel.
3. **TradingView-style chart** — Add mouse-wheel zoom, click-drag pan, and a crosshair overlay with price/date labels to the existing canvas-based CandlestickChart.
4. **S/R clustering** — Cluster nearby S/R levels into zones with aggregated strength, filter to top N, and render as shaded rectangles instead of dashed lines.
All changes are additive to existing components and preserve the glassmorphism UI style.
## Architecture
```mermaid
graph TD
subgraph Backend
OAI[OpenAI Responses API] -->|reasoning + annotations| SP[OpenAISentimentProvider]
SP -->|SentimentData + reasoning + citations| SS[sentiment_service]
SS -->|persist| DB[(PostgreSQL)]
FMP[FMP Stable API] -->|402 metadata| FP[FMPFundamentalProvider]
FP -->|FundamentalData + unavailable_fields| FS[fundamental_service]
FS -->|persist| DB
DB -->|query| SRS[sr_service]
SRS -->|cluster_sr_zones| SRS
SRS -->|SRZone list| SRAPI[/sr-levels endpoint/]
end
subgraph Frontend
SRAPI -->|zones JSON| Chart[CandlestickChart]
Chart -->|canvas render| ZR[Zone rectangles + crosshair + zoom/pan]
SS -->|API| SentAPI[/sentiment endpoint/]
SentAPI -->|reasoning + citations| SPan[SentimentPanel]
SPan -->|expand/collapse| DD1[Detail Section]
FS -->|API| FundAPI[/fundamentals endpoint/]
FundAPI -->|unavailable_fields| FPan[FundamentalsPanel]
FPan -->|expand/collapse| DD2[Detail Section]
end
```
The changes touch four layers:
- **Provider layer** — Extract additional data from external API responses (OpenAI annotations, FMP 402 reasons).
- **Service layer** — Store new fields, add zone clustering logic.
- **API/Schema layer** — Extend response schemas with new fields.
- **Frontend components** — Add interactive chart features and expandable detail sections.
## Components and Interfaces
### 1. Sentiment Provider Changes (`app/providers/openai_sentiment.py`)
The `SentimentData` DTO gains two optional fields:
```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True, slots=True)
class SentimentData:
ticker: str
classification: str
confidence: int
source: str
timestamp: datetime
reasoning: str = "" # NEW
citations: list[dict[str, str]] = field(default_factory=list) # NEW: [{"url": ..., "title": ...}]
```
The provider already parses `reasoning` from the JSON response but discards it. The change:
- Return `reasoning` from the parsed JSON in the `SentimentData`.
- Iterate `response.output` items looking for `type == "web_search_call"` output items, then extract URL annotations from the subsequent message content blocks that have `annotations` with `type == "url_citation"`. Each annotation yields `{"url": annotation.url, "title": annotation.title}`.
- If no annotations exist, return an empty list (no error).
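The annotation walk described above might look like the following sketch. The attribute shape of the Responses API output (`output` items with a `type`, message items carrying `content` blocks with `annotations`) is assumed here rather than taken from the provider code:

```python
# Sketch of the citation-extraction step; the response object shape is assumed.
def extract_citations(response) -> list[dict[str, str]]:
    citations: list[dict[str, str]] = []
    for item in getattr(response, "output", []):
        if getattr(item, "type", None) != "message":
            continue
        for block in getattr(item, "content", []):
            for ann in getattr(block, "annotations", []) or []:
                if getattr(ann, "type", None) == "url_citation":
                    citations.append({"url": ann.url, "title": ann.title})
    return citations  # empty list when no annotations exist, no error raised
```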
### 2. Sentiment DB Model Changes (`app/models/sentiment.py`)
Add two columns to `SentimentScore`:
```python
reasoning: Mapped[str] = mapped_column(Text, nullable=False, default="")
citations_json: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
```
Citations are stored as a JSON-encoded string (list of `{url, title}` dicts). This avoids a separate table for a simple list of links.
Alembic migration adds these two columns with defaults so existing rows are unaffected.
### 3. Sentiment Service Changes (`app/services/sentiment_service.py`)
`store_sentiment()` gains `reasoning: str` and `citations: list[dict]` parameters. It serializes citations to JSON and stores both fields.
### 4. Sentiment Schema Changes (`app/schemas/sentiment.py`)
```python
class CitationItem(BaseModel):
url: str
title: str
class SentimentScoreResult(BaseModel):
id: int
classification: Literal["bullish", "bearish", "neutral"]
confidence: int = Field(ge=0, le=100)
source: str
timestamp: datetime
reasoning: str = "" # NEW
citations: list[CitationItem] = [] # NEW
```
### 5. FMP Provider Changes (`app/providers/fmp.py`)
`FundamentalData` DTO gains an `unavailable_fields` dict:
```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True, slots=True)
class FundamentalData:
ticker: str
pe_ratio: float | None
revenue_growth: float | None
earnings_surprise: float | None
market_cap: float | None
fetched_at: datetime
unavailable_fields: dict[str, str] = field(default_factory=dict) # NEW: {"pe_ratio": "requires paid plan", ...}
```
In `_fetch_json_optional`, when a 402 is received, the provider records which fields map to that endpoint. The mapping:
- `ratios-ttm` → `pe_ratio`
- `financial-growth` → `revenue_growth`
- `earnings` → `earnings_surprise`
After all fetches, any field that is `None` AND whose endpoint returned 402 gets an entry in `unavailable_fields`.
### 6. Fundamentals DB Model Changes (`app/models/fundamental.py`)
Add one column:
```python
unavailable_fields_json: Mapped[str] = mapped_column(Text, nullable=False, default="{}")
```
Stored as JSON-encoded `{"field_name": "reason"}` dict.
### 7. Fundamentals Schema Changes (`app/schemas/fundamental.py`)
```python
class FundamentalResponse(BaseModel):
symbol: str
pe_ratio: float | None = None
revenue_growth: float | None = None
earnings_surprise: float | None = None
market_cap: float | None = None
fetched_at: datetime | None = None
unavailable_fields: dict[str, str] = {} # NEW
```
### 8. S/R Zone Clustering (`app/services/sr_service.py`)
New function `cluster_sr_zones()`:
```python
def cluster_sr_zones(
levels: list[dict],
current_price: float,
tolerance: float = 0.02, # 2% default clustering tolerance
max_zones: int | None = None,
) -> list[dict]:
"""Cluster nearby S/R levels into zones.
Returns list of zone dicts:
{
"low": float,
"high": float,
"midpoint": float,
"strength": int, # sum of constituent strengths, capped at 100
"type": "support" | "resistance",
"level_count": int,
}
"""
```
Algorithm:
1. Sort levels by `price_level` ascending.
2. Greedy merge: walk sorted levels; if the next level is within `tolerance` (percentage of the current cluster midpoint) of the current cluster, merge it in. Otherwise, start a new cluster.
3. For each cluster: `low` = min price, `high` = max price, `midpoint` = (low + high) / 2, `strength` = sum of constituent strengths capped at 100.
4. Tag each zone as `"support"` if midpoint < current_price, else `"resistance"`.
5. Sort by strength descending.
6. If `max_zones` is set, return only the top N.
### 9. S/R Schema Changes (`app/schemas/sr_level.py`)
```python
class SRZoneResult(BaseModel):
low: float
high: float
midpoint: float
strength: int = Field(ge=0, le=100)
type: Literal["support", "resistance"]
level_count: int
class SRLevelResponse(BaseModel):
symbol: str
levels: list[SRLevelResult]
zones: list[SRZoneResult] = [] # NEW
count: int
```
### 10. S/R Router Changes (`app/routers/sr_levels.py`)
Add `max_zones` query parameter (default 6). After fetching levels, call `cluster_sr_zones()` and include zones in the response.
### 11. CandlestickChart Enhancements (`frontend/src/components/charts/CandlestickChart.tsx`)
State additions:
- `visibleRange: { start: number, end: number }` — indices into the data array for the currently visible window.
- `isPanning: boolean`, `panStartX: number` — for drag-to-pan.
- `crosshair: { x: number, y: number } | null` — cursor position for crosshair rendering.
New event handlers:
- `onWheel` — Adjust `visibleRange` (zoom in = narrow range, zoom out = widen range). Clamp to min 10 bars, max full dataset.
- `onMouseDown` / `onMouseMove` / `onMouseUp` — When zoomed in, click-drag pans the visible range left/right.
- `onMouseMove` (extended) — Track cursor position for crosshair. Draw vertical + horizontal lines and axis labels.
- `onMouseLeave` — Clear crosshair state.
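The range arithmetic behind the zoom and pan handlers can be sketched in Python for consistency with the backend snippets (the component itself is TypeScript). The step size, helper names, and centring strategy are illustrative, and `n >= MIN_BARS` is assumed, since zoom is disabled below that per the Error Handling section:

```python
MIN_BARS = 10  # minimum visible window per Requirement 5.5

def apply_zoom(start: int, end: int, n: int, delta: float, zoom_step: float = 0.1) -> tuple[int, int]:
    """Narrow (delta > 0) or widen (delta < 0) the visible range, clamped."""
    width = end - start
    change = max(1, int(width * zoom_step))
    new_width = width - change if delta > 0 else width + change
    new_width = max(MIN_BARS, min(n, new_width))
    # keep the window centred while resizing, then clamp to [0, n]
    center = (start + end) // 2
    new_start = max(0, center - new_width // 2)
    new_end = min(n, new_start + new_width)
    new_start = max(0, new_end - new_width)
    return new_start, new_end

def apply_pan(start: int, end: int, n: int, shift_bars: int) -> tuple[int, int]:
    """Shift the visible range without changing its width."""
    width = end - start
    new_start = max(0, min(n - width, start + shift_bars))
    return new_start, new_start + width
```

Both helpers preserve the Property 12 invariant: the window never shrinks below `MIN_BARS`, never exceeds the dataset, and never leaves `[0, n]`.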
The `draw()` function changes:
- Use `data.slice(visibleRange.start, visibleRange.end)` instead of full `data`.
- After drawing candles, if `crosshair` is set, draw crosshair lines and labels.
- Replace S/R dashed lines with shaded zone rectangles when `zones` prop is provided.
New prop: `zones?: SRZone[]` (from the API response).
### 12. SentimentPanel Drill-Down (`frontend/src/components/ticker/SentimentPanel.tsx`)
Add `useState<boolean>(false)` for expand/collapse. When expanded, render:
- Reasoning text in a `<p>` block.
- Citations as a list of `<a>` links with title and URL.
Toggle button uses a chevron icon below the summary metrics.
### 13. FundamentalsPanel Drill-Down (`frontend/src/components/ticker/FundamentalsPanel.tsx`)
Add `useState<boolean>(false)` for expand/collapse. Changes:
- When a metric is `null` and `unavailable_fields[field_name]` exists, show the reason text (e.g., "Requires paid plan") in amber instead of "—".
- When expanded, show data source name ("FMP"), fetch timestamp, and a list of unavailable fields with reasons.
### 14. Frontend Type Updates (`frontend/src/lib/types.ts`)
```typescript
// Sentiment additions
export interface CitationItem {
url: string;
title: string;
}
export interface SentimentScore {
id: number;
classification: 'bullish' | 'bearish' | 'neutral';
confidence: number;
source: string;
timestamp: string;
reasoning: string; // NEW
citations: CitationItem[]; // NEW
}
// Fundamentals additions
export interface FundamentalResponse {
symbol: string;
pe_ratio: number | null;
revenue_growth: number | null;
earnings_surprise: number | null;
market_cap: number | null;
fetched_at: string | null;
unavailable_fields: Record<string, string>; // NEW
}
// S/R Zone
export interface SRZone {
low: number;
high: number;
midpoint: number;
strength: number;
type: 'support' | 'resistance';
level_count: number;
}
export interface SRLevelResponse {
symbol: string;
levels: SRLevel[];
zones: SRZone[]; // NEW
count: number;
}
```
## Data Models
### Database Schema Changes
#### `sentiment_scores` table — new columns
| Column | Type | Default | Description |
|--------|------|---------|-------------|
| `reasoning` | TEXT | `""` | AI reasoning text from OpenAI response |
| `citations_json` | TEXT | `"[]"` | JSON array of `{url, title}` citation objects |
#### `fundamental_data` table — new column
| Column | Type | Default | Description |
|--------|------|---------|-------------|
| `unavailable_fields_json` | TEXT | `"{}"` | JSON dict of `{field_name: reason}` for missing data |
No new tables are needed. The S/R zones are computed on-the-fly from existing `sr_levels` rows — they are not persisted.
### Alembic Migration
A single migration file adds the three new columns with server defaults so existing rows are populated automatically:
```python
op.add_column('sentiment_scores', sa.Column('reasoning', sa.Text(), server_default='', nullable=False))
op.add_column('sentiment_scores', sa.Column('citations_json', sa.Text(), server_default='[]', nullable=False))
op.add_column('fundamental_data', sa.Column('unavailable_fields_json', sa.Text(), server_default='{}', nullable=False))
```
## Correctness Properties
*A property is a characteristic or behavior that should hold true across all valid executions of a system — essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*
### Property 1: Sentiment reasoning extraction
*For any* valid OpenAI Responses API response containing a JSON body with a `reasoning` field, the `OpenAISentimentProvider.fetch_sentiment()` method should return a `SentimentData` whose `reasoning` field equals the reasoning string from the parsed JSON.
**Validates: Requirements 1.1**
### Property 2: Sentiment citations extraction
*For any* valid OpenAI Responses API response containing zero or more `url_citation` annotations across its output items, the `OpenAISentimentProvider.fetch_sentiment()` method should return a `SentimentData` whose `citations` list contains exactly the URLs and titles from those annotations, in order. When no annotations exist, the citations list should be empty (no error raised).
**Validates: Requirements 1.2, 1.4**
### Property 3: Sentiment data round-trip
*For any* sentiment record with arbitrary reasoning text and citations list, storing it via `store_sentiment()` and then retrieving it via the `/sentiment/{symbol}` API endpoint should return a response where the latest score's `reasoning` and `citations` fields match the originally stored values.
**Validates: Requirements 1.3**
### Property 4: Expanded sentiment detail displays all data
*For any* `SentimentScore` with non-empty reasoning and a list of citations, when the SentimentPanel detail section is expanded, the rendered output should contain the reasoning text and every citation's title and URL as clickable links.
**Validates: Requirements 2.2, 2.3**
### Property 5: Sentiment detail collapse hides content
*For any* SentimentPanel state where the detail section is expanded, collapsing it should result in the reasoning text and citations being hidden from the DOM while the summary metrics (classification, confidence, dimension score, source count) remain visible.
**Validates: Requirements 2.4**
### Property 6: FMP 402 reason recording
*For any* subset of supplementary FMP endpoints (ratios-ttm, financial-growth, earnings) that return HTTP 402, the `FMPFundamentalProvider.fetch_fundamentals()` method should return a `FundamentalData` whose `unavailable_fields` dict contains an entry for each corresponding metric field name with the reason "requires paid plan".
**Validates: Requirements 3.1**
### Property 7: Fundamentals unavailable_fields round-trip
*For any* fundamental data record with an arbitrary `unavailable_fields` dict, storing it via `store_fundamental()` and retrieving it via the `/fundamentals/{symbol}` API endpoint should return a response whose `unavailable_fields` matches the originally stored dict.
**Validates: Requirements 3.2**
### Property 8: Null field display depends on reason existence
*For any* `FundamentalResponse` where a metric field is null, the FundamentalsPanel should display the reason text from `unavailable_fields` (if present for that field) or a dash character "—" (if no reason exists for that field).
**Validates: Requirements 3.3, 3.4**
### Property 9: Fundamentals expanded detail content
*For any* `FundamentalResponse` with a fetch timestamp and unavailable fields, when the FundamentalsPanel detail section is expanded, the rendered output should contain the data source name, the formatted fetch timestamp, and each unavailable field's name and reason.
**Validates: Requirements 4.2**
### Property 10: Zoom adjusts visible range proportionally
*For any* dataset of length N (N ≥ 10) and any current visible range [start, end], applying a positive wheel delta (zoom in) should produce a new range that is strictly narrower (fewer bars), and applying a negative wheel delta (zoom out) should produce a new range that is strictly wider (more bars), unless already at the limit.
**Validates: Requirements 5.1, 5.2, 5.3**
### Property 11: Pan shifts visible range
*For any* dataset and any visible range that does not cover the full dataset, a horizontal drag of Δx pixels should shift the visible range start and end indices by a proportional amount in the corresponding direction, without changing the range width.
**Validates: Requirements 5.4**
### Property 12: Zoom range invariant
*For any* sequence of zoom and pan operations on a dataset of length N, the visible range should always satisfy: `end - start >= 10` AND `end - start <= N` AND `start >= 0` AND `end <= N`.
**Validates: Requirements 5.5**
### Property 13: Coordinate-to-value mapping
*For any* chart configuration with a visible price range [lo, hi] and visible data slice, the `yToPrice` function should map any y-coordinate within the chart area to a price within [lo, hi], and the `xToBarIndex` function should map any x-coordinate within the chart area to a valid index within the visible data slice.
**Validates: Requirements 6.3, 6.4**
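Assuming plain linear interpolation over the chart area, the two mappings might look like this sketch (names and parameters are illustrative, not the component's actual signatures):

```python
def y_to_price(y: float, top: float, height: float, lo: float, hi: float) -> float:
    """Linear map from a y pixel inside the chart area to a price in [lo, hi]."""
    frac = (y - top) / height          # 0 at the top edge, 1 at the bottom
    return hi - frac * (hi - lo)       # top of the chart is the high price

def x_to_bar_index(x: float, left: float, width: float, visible_count: int) -> int:
    """Linear map from an x pixel to an index into the visible data slice."""
    frac = (x - left) / width
    idx = int(frac * visible_count)
    return max(0, min(visible_count - 1, idx))  # clamp to valid indices
```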
### Property 14: Clustering merges nearby levels
*For any* set of S/R levels and a clustering tolerance T, after calling `cluster_sr_zones()`, no two distinct zones should have midpoints within T percent of each other. Equivalently, all input levels that are within T percent of each other must end up in the same zone.
**Validates: Requirements 7.2**
### Property 15: Zone strength is capped sum
*For any* SR zone produced by `cluster_sr_zones()`, the zone's strength should equal `min(100, sum(constituent_level_strengths))`.
**Validates: Requirements 7.3**
### Property 16: Zone type tagging
*For any* SR zone and current price, the zone's type should be `"support"` if the zone midpoint is less than the current price, and `"resistance"` otherwise.
**Validates: Requirements 7.4**
### Property 17: Zone filtering returns top N by strength
*For any* set of SR zones and a limit N, `cluster_sr_zones(..., max_zones=N)` should return at most N zones, and those zones should be the N zones with the highest strength scores from the full unfiltered set.
**Validates: Requirements 8.2**
## Error Handling
### Backend
| Scenario | Handling |
|----------|----------|
| OpenAI response has no `reasoning` field in JSON | Default to empty string `""` — no error |
| OpenAI response has no `url_citation` annotations | Return empty citations list — no error |
| OpenAI response JSON parse failure | Existing `ProviderError` handling unchanged |
| FMP endpoint returns 402 | Record in `unavailable_fields`, return `None` for that metric — no error |
| FMP profile endpoint fails | Existing `ProviderError` propagation unchanged |
| `citations_json` column contains invalid JSON | Catch `json.JSONDecodeError` in schema serialization, default to `[]` |
| `unavailable_fields_json` column contains invalid JSON | Catch `json.JSONDecodeError`, default to `{}` |
| `cluster_sr_zones()` receives empty levels list | Return empty zones list |
| `max_zones` is 0 or negative | Return empty zones list |
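The two tolerant-decode rows above amount to a small shared helper; the name `load_json_or` is illustrative:

```python
import json

def load_json_or(raw, default):
    """Decode a stored JSON column, falling back to a default on bad data."""
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return default
```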
### Frontend
| Scenario | Handling |
|----------|----------|
| `reasoning` is empty string | Detail section shows "No reasoning available" placeholder |
| `citations` is empty array | Detail section omits citations subsection |
| `unavailable_fields` is empty object | All null metrics show "—" as before |
| Chart data has fewer than 10 bars | Disable zoom (show all bars, no zoom controls) |
| Wheel event fires rapidly | Debounce zoom recalculation to 1 frame via `requestAnimationFrame` |
| Zone `low` equals `high` (single-level zone) | Render as a thin line (minimum 2px height rectangle) |
## Testing Strategy
### Property-Based Testing
Library: **Hypothesis** (Python backend), **fast-check** (TypeScript frontend)
Each property test runs a minimum of 100 iterations. Each test is tagged with a comment referencing the design property:
```
# Feature: dashboard-enhancements, Property 14: Clustering merges nearby levels
```
Backend property tests (Hypothesis):
- **Property 1**: Generate random JSON strings with reasoning fields → verify extraction.
- **Property 2**: Generate mock OpenAI response objects with 0–10 annotations → verify citations list.
- **Property 3**: Generate random reasoning + citations → store → retrieve via test client → compare.
- **Property 6**: Generate random 402/200 combinations for 3 endpoints → verify unavailable_fields mapping.
- **Property 7**: Generate random unavailable_fields dicts → store → retrieve → compare.
- **Properties 10–12**: Generate random datasets (10–500 bars) and zoom/pan sequences → verify range invariants.
- **Property 13**: Generate random chart dimensions and price ranges → verify coordinate mapping round-trips.
- **Property 14**: Generate random level sets (1–50 levels, prices 1–1000, strengths 1–100) and tolerances (0.5%–5%) → verify that no two resulting zones lie within tolerance of each other (i.e., none should have been merged).
- **Property 15**: Generate random level sets → cluster → verify each zone's strength = min(100, sum).
- **Property 16**: Generate random zones and current prices → verify type tagging.
- **Property 17**: Generate random zone sets and limits → verify top-N selection.
Frontend property tests (fast-check):
- **Property 4**: Generate random reasoning strings and citation lists → render SentimentPanel expanded → verify DOM content.
- **Property 5**: Generate random sentiment data → expand then collapse → verify summary visible, detail hidden.
- **Property 8**: Generate random FundamentalResponse with various null/non-null + reason combinations → verify displayed text.
- **Property 9**: Generate random FundamentalResponse → expand → verify source, timestamp, reasons in DOM.
### Unit Tests
Unit tests cover specific examples and edge cases:
- Sentiment provider with a real-shaped OpenAI response fixture (example for 1.1, 1.2).
- Sentiment provider with no annotations (edge case for 1.4).
- FMP provider with all-402 responses (edge case for 3.1).
- FMP provider with mixed 200/402 responses (example for 3.1).
- SentimentPanel default collapsed state (example for 2.5).
- FundamentalsPanel default collapsed state (example for 4.4).
- Chart with exactly 10 bars — zoom in should be blocked (edge case for 5.5).
- Chart with 1 bar — zoom disabled entirely (edge case).
- Crosshair removed on mouse leave (example for 6.5).
- `cluster_sr_zones()` with empty input (edge case).
- `cluster_sr_zones()` with all levels at the same price (edge case).
- `cluster_sr_zones()` with levels exactly at tolerance boundary (edge case).
- Default max_zones = 6 in the dashboard (example for 8.3).
- Zone with single constituent level (edge case — low == high).


@@ -0,0 +1,124 @@
# Requirements Document
## Introduction
This specification covers four dashboard enhancements for the stock signal platform: sentiment drill-down with OpenAI response details, fundamentals drill-down with missing-data transparency, TradingView-style chart improvements (zoom, crosshair), and S/R level clustering into filterable shaded zones. All changes target the existing TickerDetailPage and its child components, preserving the glassmorphism UI style.
## Glossary
- **Dashboard**: The TickerDetailPage in the React frontend that displays ticker data, charts, scores, sentiment, and fundamentals.
- **Sentiment_Panel**: The SentimentPanel component that displays classification, confidence, dimension score, and source count for a ticker.
- **Fundamentals_Panel**: The FundamentalsPanel component that displays P/E Ratio, Revenue Growth, Earnings Surprise, and Market Cap for a ticker.
- **Chart_Component**: The CandlestickChart canvas-based component that renders OHLCV candlesticks and S/R level overlays.
- **SR_Service**: The backend service (sr_service.py) that detects, scores, merges, and tags support/resistance levels from OHLCV data.
- **Sentiment_Provider**: The OpenAISentimentProvider that calls the OpenAI Responses API with web_search_preview to produce sentiment classifications.
- **FMP_Provider**: The FMPFundamentalProvider that fetches fundamental data from Financial Modeling Prep stable endpoints.
- **SR_Zone**: A price range representing a cluster of nearby S/R levels, displayed as a shaded area on the chart instead of individual lines.
- **Detail_Section**: A collapsible/expandable UI region within a panel that reveals additional information on user interaction.
- **Data_Availability_Indicator**: A visual element within the Fundamentals_Panel that communicates which data fields are unavailable and the reason.
- **Crosshair**: A vertical and horizontal line overlay on the Chart_Component that tracks the cursor position and displays corresponding price and date values.
## Requirements
### Requirement 1: Sentiment Detail Storage
**User Story:** As a developer, I want the backend to store the full OpenAI response details (reasoning text, web search citations, and annotations) alongside the sentiment classification, so that the frontend can display drill-down information.
#### Acceptance Criteria
1. WHEN the Sentiment_Provider receives a response from the OpenAI Responses API, THE Sentiment_Provider SHALL extract and return the reasoning text from the parsed JSON response.
2. WHEN the Sentiment_Provider receives a response containing web_search_preview output items, THE Sentiment_Provider SHALL extract and return the list of source URLs and titles from the search result annotations.
3. THE sentiment API endpoint SHALL include the reasoning text and citations list in the response payload for each sentiment score.
4. IF the OpenAI response contains no annotations or citations, THEN THE Sentiment_Provider SHALL return an empty citations list without raising an error.
### Requirement 2: Sentiment Drill-Down UI
**User Story:** As a user, I want to drill into the sentiment analysis to see the AI reasoning and source citations, so that I can evaluate the quality of the sentiment classification.
#### Acceptance Criteria
1. THE Sentiment_Panel SHALL display a clickable expand/collapse toggle below the summary metrics.
2. WHEN the user expands the Detail_Section, THE Sentiment_Panel SHALL display the reasoning text from the latest sentiment score.
3. WHEN the user expands the Detail_Section and citations are available, THE Sentiment_Panel SHALL display each citation as a clickable link showing the source title and URL.
4. WHEN the user collapses the Detail_Section, THE Sentiment_Panel SHALL hide the reasoning and citations without removing the summary metrics.
5. THE Detail_Section SHALL default to the collapsed state on initial render.
### Requirement 3: Fundamentals Data Availability Transparency
**User Story:** As a user, I want to understand why certain fundamental metrics are missing for a ticker, so that I can distinguish between "data not available from provider" and "data not fetched."
#### Acceptance Criteria
1. WHEN the FMP_Provider receives an HTTP 402 response for a supplementary endpoint, THE FMP_Provider SHALL record the endpoint name and the reason "requires paid plan" in the response metadata.
2. THE fundamentals API endpoint SHALL include a field listing which data fields are unavailable and the corresponding reason for each.
3. WHEN a fundamental metric value is null and a corresponding unavailability reason exists, THE Fundamentals_Panel SHALL display the reason text (e.g., "Requires paid plan") instead of a dash character.
4. WHEN a fundamental metric value is null and no unavailability reason exists, THE Fundamentals_Panel SHALL display a dash character as the placeholder.
### Requirement 4: Fundamentals Drill-Down UI
**User Story:** As a user, I want to drill into the fundamentals data to see additional detail and data source information, so that I can better assess the fundamental metrics.
#### Acceptance Criteria
1. THE Fundamentals_Panel SHALL display a clickable expand/collapse toggle below the summary metrics.
2. WHEN the user expands the Detail_Section, THE Fundamentals_Panel SHALL display the data source name, the fetch timestamp, and any unavailability reasons for missing fields.
3. WHEN the user collapses the Detail_Section, THE Fundamentals_Panel SHALL hide the detail information without removing the summary metrics.
4. THE Detail_Section SHALL default to the collapsed state on initial render.
### Requirement 5: Chart Zoom Capability
**User Story:** As a user, I want to zoom in and out on the candlestick chart, so that I can examine specific time periods in detail or see the full price history.
#### Acceptance Criteria
1. WHEN the user scrolls the mouse wheel over the Chart_Component, THE Chart_Component SHALL zoom in or out by adjusting the visible date range.
2. WHEN the user zooms in, THE Chart_Component SHALL increase the candle width and reduce the number of visible bars proportionally.
3. WHEN the user zooms out, THE Chart_Component SHALL decrease the candle width and increase the number of visible bars proportionally.
4. WHEN the chart is zoomed in, THE Chart_Component SHALL allow the user to pan left and right by clicking and dragging horizontally.
5. THE Chart_Component SHALL constrain zoom limits so that the minimum visible range is 10 bars and the maximum visible range is the full dataset length.
6. THE Chart_Component SHALL re-render S/R overlays correctly at every zoom level.
### Requirement 6: Chart Crosshair
**User Story:** As a user, I want a crosshair overlay on the chart that tracks my cursor, so that I can precisely read price and date values at any point.
#### Acceptance Criteria
1. WHEN the user moves the cursor over the Chart_Component, THE Chart_Component SHALL draw a vertical line at the cursor x-position spanning the full chart height.
2. WHEN the user moves the cursor over the Chart_Component, THE Chart_Component SHALL draw a horizontal line at the cursor y-position spanning the full chart width.
3. THE Chart_Component SHALL display the corresponding price value as a label on the y-axis at the horizontal crosshair position.
4. THE Chart_Component SHALL display the corresponding date value as a label on the x-axis at the vertical crosshair position.
5. WHEN the cursor leaves the Chart_Component, THE Chart_Component SHALL remove the crosshair lines and labels.
### Requirement 7: S/R Level Clustering
**User Story:** As a user, I want nearby S/R levels to be clustered into zones, so that the chart is less cluttered and I can focus on the most significant price areas.
#### Acceptance Criteria
1. THE SR_Service SHALL accept a configurable clustering tolerance parameter that defines the maximum price distance (as a percentage) for grouping levels into a single SR_Zone.
2. WHEN two or more S/R levels fall within the clustering tolerance of each other, THE SR_Service SHALL merge those levels into a single SR_Zone with a price range (low bound, high bound) and an aggregated strength score.
3. THE SR_Service SHALL compute the aggregated strength of an SR_Zone as the sum of constituent level strengths, capped at 100.
4. THE SR_Service SHALL tag each SR_Zone as "support" or "resistance" based on the zone midpoint relative to the current price.
### Requirement 8: S/R Zone Filtering
**User Story:** As a user, I want to see only the strongest S/R zones on the chart, so that I can focus on the most significant price areas.
#### Acceptance Criteria
1. THE S/R API endpoint SHALL accept an optional parameter to limit the number of returned zones.
2. WHEN a zone limit is specified, THE SR_Service SHALL return only the zones with the highest aggregated strength scores, up to the specified limit.
3. THE Dashboard SHALL default to displaying a maximum of 6 SR_Zones on the chart.
### Requirement 9: S/R Zone Chart Rendering
**User Story:** As a user, I want S/R zones displayed as shaded areas on the chart instead of individual lines, so that I can visually identify price ranges of significance.
#### Acceptance Criteria
1. THE Chart_Component SHALL render each SR_Zone as a semi-transparent shaded rectangle spanning the zone price range (low bound to high bound) across the full chart width.
2. THE Chart_Component SHALL use green shading for support zones and red shading for resistance zones.
3. THE Chart_Component SHALL display a label for each SR_Zone showing the zone midpoint price and strength score.
4. THE Chart_Component SHALL render SR_Zones behind the candlestick bodies so that candles remain fully visible.
5. WHEN the chart is zoomed, THE Chart_Component SHALL re-render SR_Zones at the correct vertical positions for the current price scale.
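The rectangle math behind criteria 1 and 5 can be sketched as follows. All names are illustrative, the canvas convention (y grows downward) is assumed, and the 2px minimum height mirrors the single-level zone handling described later in the implementation plan:

```python
# Sketch: map an SR_Zone's price range onto canvas coordinates.
def zone_to_rect(low: float, high: float, price_min: float, price_max: float,
                 chart_height: float, chart_width: float) -> dict:
    def price_to_y(p: float) -> float:
        # Canvas y grows downward: higher prices map to smaller y values.
        return (price_max - p) / (price_max - price_min) * chart_height

    top = price_to_y(high)
    bottom = price_to_y(low)
    return {
        "x": 0.0,
        "y": top,
        "width": chart_width,
        "height": max(2.0, bottom - top),  # single-level zones (low == high) stay visible
    }
```

Because the rectangle is derived from the current `price_min`/`price_max`, re-running it after a zoom automatically satisfies criterion 5.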

# Implementation Plan: Dashboard Enhancements
## Overview
Incremental implementation of four dashboard enhancements: sentiment drill-down, fundamentals drill-down, chart zoom/crosshair, and S/R zone clustering. Each feature area is built backend-first (model → service → schema → router) then frontend, with tests alongside implementation. All changes are additive to existing components.
## Tasks
- [x] 1. Sentiment drill-down — backend
- [x] 1.1 Add `reasoning` and `citations_json` columns to `SentimentScore` model and create Alembic migration
- Add `reasoning: Mapped[str] = mapped_column(Text, nullable=False, default="")` and `citations_json: Mapped[str] = mapped_column(Text, nullable=False, default="[]")` to `app/models/sentiment.py`
- Create Alembic migration with `server_default` so existing rows are backfilled
- _Requirements: 1.1, 1.2, 1.3_
- [x] 1.2 Update `OpenAISentimentProvider` to extract reasoning and citations from OpenAI response
- Add `reasoning` and `citations` fields to the `SentimentData` dataclass
- Extract `reasoning` from the parsed JSON response body
- Iterate `response.output` items for `url_citation` annotations, collect `{"url": ..., "title": ...}` dicts
- Return empty citations list when no annotations exist (no error)
- _Requirements: 1.1, 1.2, 1.4_
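The annotation walk in 1.2 can be sketched as below. The response shape (output items whose content entries carry `annotations` with type `url_citation`) is assumed from the task notes, not from the actual OpenAI SDK objects:

```python
# Hedged sketch of citation extraction; the input shape is an assumption.
def extract_citations(output: list[dict]) -> list[dict]:
    citations: list[dict] = []
    for item in output:
        for content in item.get("content", []):
            for ann in content.get("annotations", []):
                if ann.get("type") == "url_citation":
                    citations.append({"url": ann.get("url"), "title": ann.get("title")})
    return citations  # empty list (not an error) when no annotations exist
```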
- [x] 1.3 Update `sentiment_service.store_sentiment()` to persist reasoning and citations
- Accept `reasoning` and `citations` parameters
- Serialize citations to JSON string before storing
- _Requirements: 1.3_
- [x] 1.4 Update sentiment schema and router to include reasoning and citations in API response
- Add `CitationItem` model and `reasoning`/`citations` fields to `SentimentScoreResult` in `app/schemas/sentiment.py`
- Deserialize `citations_json` when building the response, catch `JSONDecodeError` and default to `[]`
- _Requirements: 1.3_
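The defensive deserialization in 1.4 amounts to a few lines; this sketch adds an `isinstance` guard beyond what the task asks for, as an extra precaution against non-list JSON:

```python
import json

# Sketch: bad or non-list JSON degrades to an empty citations list
# rather than failing the API response.
def parse_citations(citations_json: str) -> list:
    try:
        parsed = json.loads(citations_json)
    except json.JSONDecodeError:
        return []
    return parsed if isinstance(parsed, list) else []
```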
- [ ]* 1.5 Write property tests for sentiment reasoning and citations extraction
- **Property 1: Sentiment reasoning extraction** — Generate random JSON with reasoning fields, verify extraction
- **Validates: Requirements 1.1**
- **Property 2: Sentiment citations extraction** — Generate mock OpenAI responses with 0–10 annotations, verify citations list
- **Validates: Requirements 1.2, 1.4**
- [ ]* 1.6 Write property test for sentiment data round-trip
- **Property 3: Sentiment data round-trip** — Generate random reasoning + citations, store, retrieve via test client, compare
- **Validates: Requirements 1.3**
- [x] 2. Sentiment drill-down — frontend
- [x] 2.1 Add `CitationItem`, `reasoning`, and `citations` fields to `SentimentScore` type in `frontend/src/lib/types.ts`
- _Requirements: 1.3, 2.2, 2.3_
- [x] 2.2 Add expandable detail section to `SentimentPanel`
- Add `useState<boolean>(false)` for expand/collapse toggle
- Render chevron toggle button below summary metrics
- When expanded: show reasoning text (or "No reasoning available" placeholder if empty) and citations as clickable `<a>` links
- When collapsed: hide reasoning and citations, keep summary metrics visible
- Default to collapsed state on initial render
- Preserve glassmorphism UI style
- _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5_
- [ ]* 2.3 Write property tests for SentimentPanel drill-down
- **Property 4: Expanded sentiment detail displays all data** — Generate random reasoning and citations, render expanded, verify DOM content
- **Validates: Requirements 2.2, 2.3**
- **Property 5: Sentiment detail collapse hides content** — Expand then collapse, verify summary visible and detail hidden
- **Validates: Requirements 2.4**
- [x] 3. Checkpoint — Sentiment drill-down complete
- Ensure all tests pass; ask the user if questions arise.
- [x] 4. Fundamentals drill-down — backend
- [x] 4.1 Add `unavailable_fields_json` column to fundamental model and create Alembic migration
- Add `unavailable_fields_json: Mapped[str] = mapped_column(Text, nullable=False, default="{}")` to `app/models/fundamental.py`
- Add column to the same Alembic migration as sentiment columns (or a new one if migration 1.1 is already applied), with `server_default='{}'`
- _Requirements: 3.1, 3.2_
- [x] 4.2 Update `FMPFundamentalProvider` to record 402 reasons in `unavailable_fields`
- Add `unavailable_fields` field to `FundamentalData` dataclass
- In `_fetch_json_optional`, when HTTP 402 is received, map endpoint to field name: `ratios-ttm` → `pe_ratio`, `financial-growth` → `revenue_growth`, `earnings` → `earnings_surprise`
- Record `"requires paid plan"` as the reason for each affected field
- _Requirements: 3.1_
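The 402 handling in 4.2 can be sketched as follows; the endpoint-to-field mapping comes from the task text, while the function name is hypothetical:

```python
# Sketch of recording "requires paid plan" reasons on HTTP 402 responses.
ENDPOINT_FIELD_MAP = {
    "ratios-ttm": "pe_ratio",
    "financial-growth": "revenue_growth",
    "earnings": "earnings_surprise",
}

def record_unavailable(endpoint: str, status_code: int,
                       unavailable_fields: dict) -> None:
    # Only a 402 (Payment Required) records a reason; other statuses are untouched.
    if status_code == 402 and endpoint in ENDPOINT_FIELD_MAP:
        unavailable_fields[ENDPOINT_FIELD_MAP[endpoint]] = "requires paid plan"
```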
- [x] 4.3 Update `fundamental_service` to persist `unavailable_fields`
- Serialize `unavailable_fields` dict to JSON string before storing
- _Requirements: 3.2_
- [x] 4.4 Update fundamentals schema and router to include `unavailable_fields` in API response
- Add `unavailable_fields: dict[str, str] = {}` to `FundamentalResponse` in `app/schemas/fundamental.py`
- Deserialize `unavailable_fields_json` when building the response, catch `JSONDecodeError` and default to `{}`
- _Requirements: 3.2_
- [ ]* 4.5 Write property tests for FMP 402 reason recording and round-trip
- **Property 6: FMP 402 reason recording** — Generate random 402/200 combinations for 3 endpoints, verify unavailable_fields mapping
- **Validates: Requirements 3.1**
- **Property 7: Fundamentals unavailable_fields round-trip** — Generate random dicts, store, retrieve, compare
- **Validates: Requirements 3.2**
- [x] 5. Fundamentals drill-down — frontend
- [x] 5.1 Add `unavailable_fields` to `FundamentalResponse` type in `frontend/src/lib/types.ts`
- _Requirements: 3.2, 3.3_
- [x] 5.2 Update `FundamentalsPanel` to show unavailability reasons and expandable detail section
- When a metric is null and `unavailable_fields[field_name]` exists, display reason text in amber instead of "—"
- When a metric is null and no reason exists, display "—"
- Add expand/collapse toggle below summary metrics (default collapsed)
- When expanded: show data source name ("FMP"), fetch timestamp, and list of unavailable fields with reasons
- When collapsed: hide detail, keep summary metrics visible
- Preserve glassmorphism UI style
- _Requirements: 3.3, 3.4, 4.1, 4.2, 4.3, 4.4_
- [ ]* 5.3 Write property tests for FundamentalsPanel display logic
- **Property 8: Null field display depends on reason existence** — Generate random FundamentalResponse with various null/reason combos, verify displayed text
- **Validates: Requirements 3.3, 3.4**
- **Property 9: Fundamentals expanded detail content** — Generate random response, expand, verify source/timestamp/reasons in DOM
- **Validates: Requirements 4.2**
- [x] 6. Checkpoint — Fundamentals drill-down complete
- Ensure all tests pass; ask the user if questions arise.
- [x] 7. S/R zone clustering — backend
- [x] 7.1 Implement `cluster_sr_zones()` function in `app/services/sr_service.py`
- Sort levels by price ascending
- Greedy merge: walk sorted levels, merge if within tolerance % of current cluster midpoint
- Compute zone: low, high, midpoint, strength (sum capped at 100), level_count
- Tag zone type: "support" if midpoint < current_price, else "resistance"
- Sort by strength descending
- If `max_zones` set, return top N; if 0 or negative, return empty list
- Handle empty input by returning empty list
- _Requirements: 7.1, 7.2, 7.3, 7.4, 8.2_
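The steps above can be sketched as a standalone function. Level and zone shapes are simplified to plain dicts here, and the tolerance is applied against the running cluster midpoint as described; this is a sketch of the algorithm, not the production `sr_service` code:

```python
# Greedy S/R clustering sketch: sort by price, merge within tolerance of the
# running cluster midpoint, cap strength at 100, tag vs. current price,
# then keep the top-N zones by strength.
def cluster_sr_zones(levels, current_price, tolerance_pct=1.0, max_zones=None):
    if max_zones is not None and max_zones <= 0:
        return []  # zero or negative limit yields an empty list
    zones = []
    for lv in sorted(levels, key=lambda l: l["price"]):
        if zones:
            z = zones[-1]
            mid = (z["low"] + z["high"]) / 2
            if abs(lv["price"] - mid) <= mid * tolerance_pct / 100:
                z["low"] = min(z["low"], lv["price"])
                z["high"] = max(z["high"], lv["price"])
                z["strength"] = min(100, z["strength"] + lv["strength"])
                z["level_count"] += 1
                continue
        zones.append({"low": lv["price"], "high": lv["price"],
                      "strength": lv["strength"], "level_count": 1})
    for z in zones:
        z["midpoint"] = (z["low"] + z["high"]) / 2
        z["type"] = "support" if z["midpoint"] < current_price else "resistance"
    zones.sort(key=lambda z: z["strength"], reverse=True)
    return zones[:max_zones] if max_zones else zones
```

Empty input falls through the loop and returns an empty list, matching the last bullet above.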
- [x] 7.2 Add `SRZoneResult` schema and update `SRLevelResponse` in `app/schemas/sr_level.py`
- Add `SRZoneResult` model with `low`, `high`, `midpoint`, `strength`, `type`, `level_count`
- Add `zones: list[SRZoneResult] = []` to `SRLevelResponse`
- _Requirements: 7.2, 9.1_
- [x] 7.3 Update S/R router to accept `max_zones` parameter and return zones
- Add `max_zones: int = 6` query parameter to the S/R levels endpoint
- Call `cluster_sr_zones()` with fetched levels and current price
- Include zones in the response
- _Requirements: 8.1, 8.3_
- [ ]* 7.4 Write property tests for S/R zone clustering
- **Property 14: Clustering merges nearby levels** — Generate random level sets and tolerances, verify no two zones have midpoints within tolerance
- **Validates: Requirements 7.2**
- **Property 15: Zone strength is capped sum** — Generate random level sets, cluster, verify strength = min(100, sum)
- **Validates: Requirements 7.3**
- **Property 16: Zone type tagging** — Generate random zones and current prices, verify support/resistance tagging
- **Validates: Requirements 7.4**
- **Property 17: Zone filtering returns top N by strength** — Generate random zone sets and limits, verify top-N selection
- **Validates: Requirements 8.2**
- [x] 8. Checkpoint — S/R clustering backend complete
- Ensure all tests pass; ask the user if questions arise.
- [x] 9. Chart enhancements — zoom and pan
- [x] 9.1 Add `SRZone` and `SRLevelResponse.zones` types to `frontend/src/lib/types.ts`
- _Requirements: 9.1_
- [x] 9.2 Implement zoom (mouse wheel) on `CandlestickChart`
- Add `visibleRange: { start: number, end: number }` state initialized to full dataset
- Add `onWheel` handler: positive delta narrows range (zoom in), negative widens (zoom out)
- Clamp visible range to min 10 bars, max full dataset length
- Disable zoom if dataset has fewer than 10 bars
- Slice data by `visibleRange` for rendering
- Debounce zoom via `requestAnimationFrame`
- _Requirements: 5.1, 5.2, 5.3, 5.5_
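The visible-range math above is framework-agnostic; a Python sketch of the clamping (the zoom factor per wheel tick and all names are assumptions) looks like this:

```python
# Sketch of wheel-zoom range arithmetic with a 10-bar minimum window.
MIN_BARS = 10

def zoom_range(start: int, end: int, total: int, delta: float, factor: float = 0.2):
    """Return a new (start, end) visible range after a wheel event."""
    width = end - start
    if total < MIN_BARS:
        return start, end                      # zoom disabled on tiny datasets
    change = int(width * factor)
    if delta > 0:                              # positive delta narrows range (zoom in)
        new_width = max(MIN_BARS, width - 2 * change)
    else:                                      # negative delta widens range (zoom out)
        new_width = min(total, width + 2 * change)
    center = (start + end) // 2
    new_start = max(0, center - new_width // 2)
    return new_start, min(total, new_start + new_width)
```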
- [x] 9.3 Implement pan (click-drag) on `CandlestickChart`
- Add `isPanning` and `panStartX` state
- `onMouseDown` starts pan, `onMouseMove` shifts visible range proportionally, `onMouseUp` ends pan
- Pan only active when zoomed in (visible range < full dataset)
- Clamp range to dataset bounds
- _Requirements: 5.4_
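The pan clamping can be sketched in the same style; the bar-shift argument would be derived from the drag delta in pixels divided by the candle width:

```python
# Sketch: shift the visible range by a number of bars, clamped to the dataset.
def pan_range(start: int, end: int, total: int, bar_shift: int):
    width = end - start
    if width >= total:                 # pan only active when zoomed in
        return start, end
    new_start = max(0, min(total - width, start + bar_shift))
    return new_start, new_start + width  # width never changes during a pan
```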
- [x] 9.4 Implement crosshair overlay on `CandlestickChart`
- Add `crosshair: { x: number, y: number } | null` state
- `onMouseMove` updates crosshair position
- Draw vertical line at cursor x spanning full chart height
- Draw horizontal line at cursor y spanning full chart width
- Display price label on y-axis at horizontal line position
- Display date label on x-axis at vertical line position
- `onMouseLeave` clears crosshair
- _Requirements: 6.1, 6.2, 6.3, 6.4, 6.5_
- [ ]* 9.5 Write property tests for chart zoom/pan invariants
- **Property 10: Zoom adjusts visible range proportionally** — Generate random datasets and wheel deltas, verify range narrows/widens
- **Validates: Requirements 5.1, 5.2, 5.3**
- **Property 11: Pan shifts visible range** — Generate random ranges and drag deltas, verify shift without width change
- **Validates: Requirements 5.4**
- **Property 12: Zoom range invariant** — Generate random zoom/pan sequences, verify range bounds always valid
- **Validates: Requirements 5.5**
- **Property 13: Coordinate-to-value mapping** — Generate random chart configs, verify yToPrice and xToBarIndex mappings
- **Validates: Requirements 6.3, 6.4**
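The coordinate-to-value mappings named in Property 13 reduce to simple linear transforms; a sketch (function names are illustrative, canvas y grows downward):

```python
# Sketch of the crosshair label mappings: canvas position -> price / bar index.
def y_to_price(y: float, height: float, price_min: float, price_max: float) -> float:
    return price_max - (y / height) * (price_max - price_min)

def x_to_bar_index(x: float, width: float, visible_bars: int) -> int:
    # Clamp to valid indices so a cursor at the right edge stays in range.
    return min(visible_bars - 1, max(0, int(x / width * visible_bars)))
```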
- [x] 10. S/R zone rendering on chart
- [x] 10.1 Update `CandlestickChart` to accept `zones` prop and render shaded zone rectangles
- Accept `zones?: SRZone[]` prop
- Render each zone as a semi-transparent rectangle spanning low→high price range across full chart width
- Use green shading (rgba) for support zones, red shading for resistance zones
- Draw zones behind candlestick bodies (render zones first, then candles)
- Display label with midpoint price and strength score for each zone
- Re-render zones correctly at every zoom level using current price scale
- _Requirements: 9.1, 9.2, 9.3, 9.4, 9.5_
- [x] 10.2 Update `SROverlay` and `TickerDetailPage` to pass zones to `CandlestickChart`
- Update `useTickerDetail` hook or `SROverlay` to extract zones from the S/R API response
- Pass zones array to `CandlestickChart` component
- Default to max 6 zones (handled by backend `max_zones=6` default)
- _Requirements: 8.3, 9.1_
- [x] 10.3 Ensure S/R overlays re-render correctly at all zoom levels
- Verify zone rectangles reposition when zoom/pan changes the visible price scale
- Handle single-level zones (low == high) as thin 2px-height rectangles
- _Requirements: 5.6, 9.5_
- [x] 11. Final checkpoint — All features integrated
- Ensure all tests pass; ask the user if questions arise.
## Notes
- Tasks marked with `*` are optional and can be skipped for faster MVP
- Each task references specific requirements for traceability
- Checkpoints ensure incremental validation after each feature area
- Property tests validate universal correctness properties from the design document
- The Alembic migration for sentiment and fundamentals columns should ideally be a single migration file
- S/R zones are computed on-the-fly (not persisted), so no additional migration is needed for zones

{"specId": "997fa90b-08bc-4b72-b099-ecc0ad611b06", "workflowType": "requirements-first", "specType": "bugfix"}

# Bugfix Requirements Document
## Introduction
The R:R scanner's `scan_ticker` function selects trade setup targets by picking whichever S/R level yields the highest R:R ratio. Because R:R = reward / risk and risk is fixed (ATR-based stop), this always favors the most distant S/R level. The result is unrealistic trade setups targeting far-away levels that price is unlikely to reach. The scanner should instead select the highest-quality target by balancing R:R ratio with level strength and proximity to current price.
## Bug Analysis
### Current Behavior (Defect)
1.1 WHEN scanning for long setups THEN the system iterates all resistance levels above entry price and selects the one with the maximum R:R ratio, which is always the most distant level since risk is fixed
1.2 WHEN scanning for short setups THEN the system iterates all support levels below entry price and selects the one with the maximum R:R ratio, which is always the most distant level since risk is fixed
1.3 WHEN multiple S/R levels exist at varying distances with different strength values THEN the system ignores the `strength` field entirely and selects based solely on R:R magnitude
1.4 WHEN a weak, distant S/R level exists alongside a strong, nearby S/R level THEN the system selects the weak distant level because it produces a higher R:R ratio, resulting in an unrealistic trade setup
### Expected Behavior (Correct)
2.1 WHEN scanning for long setups THEN the system SHALL compute a quality score for each candidate resistance level that factors in R:R ratio, S/R level strength, and proximity to entry price, and select the level with the highest quality score
2.2 WHEN scanning for short setups THEN the system SHALL compute a quality score for each candidate support level that factors in R:R ratio, S/R level strength, and proximity to entry price, and select the level with the highest quality score
2.3 WHEN multiple S/R levels exist at varying distances with different strength values THEN the system SHALL weight stronger levels higher in the quality score, favoring targets that price is more likely to reach
2.4 WHEN a weak, distant S/R level exists alongside a strong, nearby S/R level THEN the system SHALL prefer the strong nearby level unless the distant level's combined quality score (considering its lower proximity and strength factors) still exceeds the nearby level's score
### Unchanged Behavior (Regression Prevention)
3.1 WHEN no S/R levels exist above entry price for longs (or below for shorts) THEN the system SHALL CONTINUE TO produce no setup for that direction
3.2 WHEN no candidate level meets the R:R threshold THEN the system SHALL CONTINUE TO produce no setup for that direction
3.3 WHEN only one S/R level exists in the target direction THEN the system SHALL CONTINUE TO evaluate it against the R:R threshold and produce a setup if it qualifies
3.4 WHEN scanning all tickers THEN the system SHALL CONTINUE TO process each ticker independently and persist results to the database
3.5 WHEN fetching stored trade setups THEN the system SHALL CONTINUE TO return them sorted by R:R ratio descending with composite score as secondary sort

# R:R Scanner Target Quality Bugfix Design
## Overview
The `scan_ticker` function in `app/services/rr_scanner_service.py` selects trade setup targets by iterating candidate S/R levels and picking the one with the highest R:R ratio. Because risk is fixed (ATR × multiplier), R:R is a monotonically increasing function of distance from entry price. This means the scanner always selects the most distant S/R level, producing unrealistic trade setups.
The fix replaces the `max(rr)` selection with a quality score that balances three factors: R:R ratio, S/R level strength (0–100), and proximity to current price. The quality score is computed as a weighted sum of normalized components, and the candidate with the highest quality score is selected as the target.
## Glossary
- **Bug_Condition (C)**: Multiple candidate S/R levels exist in the target direction, and the current code selects the most distant one purely because it has the highest R:R ratio, ignoring strength and proximity
- **Property (P)**: The scanner should select the candidate with the highest quality score (a weighted combination of R:R ratio, strength, and proximity) rather than the highest raw R:R ratio
- **Preservation**: All behavior for single-candidate scenarios, no-candidate scenarios, R:R threshold filtering, database persistence, and `get_trade_setups` sorting must remain unchanged
- **scan_ticker**: The function in `app/services/rr_scanner_service.py` that scans a single ticker for long and short trade setups
- **SRLevel.strength**: An integer 0–100 representing how many times price has touched this level relative to total bars (computed by `sr_service._strength_from_touches`)
- **quality_score**: New scoring metric: `w_rr * norm_rr + w_strength * norm_strength + w_proximity * norm_proximity`
## Bug Details
### Fault Condition
The bug manifests when multiple S/R levels exist in the target direction (above entry for longs, below entry for shorts) and the scanner selects the most distant level because it has the highest R:R ratio, even though a closer, stronger level would be a more realistic target.
**Formal Specification:**
```
FUNCTION isBugCondition(input)
INPUT: input of type {entry_price, risk, candidate_levels: list[{price_level, strength}]}
OUTPUT: boolean
candidates := [lv for lv in candidate_levels where reward(lv) / risk >= rr_threshold]
IF len(candidates) < 2 THEN RETURN false
max_rr_level := argmax(candidates, key=lambda lv: reward(lv) / risk)
max_quality_level := argmax(candidates, key=lambda lv: quality_score(lv, entry_price, risk))
RETURN max_rr_level != max_quality_level
END FUNCTION
```
### Examples
- **Long, 2 resistance levels**: Entry=100, ATR-stop=97 (risk=3). Level A: price=103, strength=80 (R:R=1.0). Level B: price=115, strength=10 (R:R=5.0). Current code picks B (highest R:R). Expected: picks A (strong, nearby, realistic).
- **Long, 3 resistance levels**: Entry=50, risk=2. Level A: price=53, strength=90 (R:R=1.5). Level B: price=58, strength=40 (R:R=4.0). Level C: price=70, strength=5 (R:R=10.0). Current code picks C. Expected: picks A or B depending on quality weights.
- **Short, 2 support levels**: Entry=200, risk=5. Level A: price=192, strength=70 (R:R=1.6). Level B: price=170, strength=15 (R:R=6.0). Current code picks B. Expected: picks A.
- **Single candidate (no bug)**: Entry=100, risk=3. Only Level A: price=106, strength=50 (R:R=2.0). Both old and new code select A — no divergence.
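Running the first example through the `_compute_quality_score` helper proposed in the Fix Implementation section (default weights 0.35/0.35/0.30, `rr_cap=10`) confirms the intended ordering:

```python
# Helper reproduced from the Fix Implementation section below.
def _compute_quality_score(rr, strength, distance, entry_price, *,
                           w_rr=0.35, w_strength=0.35, w_proximity=0.30,
                           rr_cap=10.0):
    norm_rr = min(rr / rr_cap, 1.0)
    norm_strength = strength / 100.0
    norm_proximity = 1.0 - min(distance / entry_price, 1.0)
    return w_rr * norm_rr + w_strength * norm_strength + w_proximity * norm_proximity

# Example 1: entry=100, risk=3; A = (103, strength 80), B = (115, strength 10).
score_a = _compute_quality_score(rr=1.0, strength=80, distance=3, entry_price=100)
score_b = _compute_quality_score(rr=5.0, strength=10, distance=15, entry_price=100)
# score_a ≈ 0.606 beats score_b ≈ 0.465: the strong nearby level A is selected.
```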
## Expected Behavior
### Preservation Requirements
**Unchanged Behaviors:**
- When no S/R levels exist in the target direction, no setup is produced for that direction
- When no candidate level meets the R:R threshold, no setup is produced
- When only one S/R level exists in the target direction, it is evaluated against the R:R threshold and used if it qualifies
- `scan_all_tickers` processes each ticker independently; one failure does not stop others
- `get_trade_setups` returns results sorted by R:R ratio descending with composite score as secondary sort
- Database persistence: old setups are deleted and new ones inserted per ticker
- ATR computation, OHLCV fetching, and stop-loss calculation remain unchanged
- The TradeSetup model fields and their rounding (4 decimal places) remain unchanged
**Scope:**
All inputs where only zero or one candidate S/R levels exist in the target direction are completely unaffected by this fix. The fix only changes the selection logic when multiple qualifying candidates exist.
## Hypothesized Root Cause
Based on the bug description, the root cause is straightforward:
1. **Selection by max R:R only**: The inner loop in `scan_ticker` tracks `best_rr` and `best_target`, selecting whichever level produces the highest `rr = reward / risk`. Since `risk` is constant (ATR-based), `rr` is proportional to distance. The code has no mechanism to factor in `SRLevel.strength` or proximity.
2. **No quality scoring exists**: The `SRLevel.strength` field (0–100) is available in the database and loaded by the query, but the selection loop never reads it. There is no quality score computation anywhere in the codebase.
3. **No proximity normalization**: Distance from entry is used only to compute reward, never as a penalty. Closer levels are always disadvantaged.
## Correctness Properties
Property 1: Fault Condition - Quality Score Selection Replaces Max R:R
_For any_ input where multiple candidate S/R levels exist in the target direction and meet the R:R threshold, the fixed `scan_ticker` function SHALL select the candidate with the highest quality score (weighted combination of normalized R:R, normalized strength, and normalized proximity) rather than the candidate with the highest raw R:R ratio.
**Validates: Requirements 2.1, 2.2, 2.3, 2.4**
Property 2: Preservation - Single/Zero Candidate Behavior Unchanged
_For any_ input where zero or one candidate S/R levels exist in the target direction, the fixed `scan_ticker` function SHALL produce the same result as the original function, preserving the existing filtering, persistence, and output behavior.
**Validates: Requirements 3.1, 3.2, 3.3, 3.4, 3.5**
## Fix Implementation
### Changes Required
Assuming our root cause analysis is correct:
**File**: `app/services/rr_scanner_service.py`
**Function**: `scan_ticker`
**Specific Changes**:
1. **Add `_compute_quality_score` helper function**: A new module-level function that computes the quality score for a candidate S/R level given entry price, risk, and configurable weights.
```python
def _compute_quality_score(
rr: float,
strength: int,
distance: float,
entry_price: float,
*,
w_rr: float = 0.35,
w_strength: float = 0.35,
w_proximity: float = 0.30,
rr_cap: float = 10.0,
) -> float:
norm_rr = min(rr / rr_cap, 1.0)
norm_strength = strength / 100.0
norm_proximity = 1.0 - min(distance / entry_price, 1.0)
return w_rr * norm_rr + w_strength * norm_strength + w_proximity * norm_proximity
```
- `norm_rr`: R:R capped at `rr_cap` (default 10) and divided by the cap to yield a 0–1 range
- `norm_strength`: Strength divided by 100 (already a 0–100 integer)
- `norm_proximity`: `1.0 - min(distance / entry_price, 1.0)`, clamped so very distant levels bottom out at 0; closer levels score higher
- Default weights: 0.35 R:R, 0.35 strength, 0.30 proximity (sum = 1.0)
2. **Replace long setup selection loop**: Instead of tracking `best_rr` / `best_target`, iterate candidates, compute quality score for each, and track `best_quality` / `best_candidate`. Still filter by `rr >= rr_threshold` before scoring. Store the selected level's R:R in the TradeSetup (not the quality score — R:R remains the reported metric).
3. **Replace short setup selection loop**: Same change as longs but for levels below entry.
4. **Pass `SRLevel` object through selection**: The loop already has access to `lv.strength` from the query. No additional DB queries needed.
5. **No changes to `get_trade_setups`**: Sorting by `rr_ratio` descending remains. The `rr_ratio` stored in TradeSetup is the actual R:R of the selected level, not the quality score.
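Changes 2 and 3 can be illustrated with a self-contained version of the long-side loop. Level tuples are `(price_level, strength)`, the scoring is inlined with the default weights, and all names are a sketch rather than the production code:

```python
# Illustrative long-side selection: filter by R:R threshold, then pick the
# candidate with the highest quality score instead of the highest raw R:R.
def select_long_target(levels, entry_price, risk, rr_threshold):
    best = None  # (quality, price, rr)
    for price, strength in levels:
        reward = price - entry_price
        if reward <= 0:
            continue  # only resistance levels above entry are candidates
        rr = reward / risk
        if rr < rr_threshold:
            continue  # R:R threshold filter is unchanged from the original code
        quality = (0.35 * min(rr / 10.0, 1.0)
                   + 0.35 * strength / 100.0
                   + 0.30 * (1.0 - min(reward / entry_price, 1.0)))
        if best is None or quality > best[0]:
            best = (quality, price, rr)
    # TradeSetup.rr_ratio stores the selected level's actual R:R, not the score.
    return None if best is None else (best[1], best[2])
```

The short-side loop mirrors this with `reward = entry_price - price` over support levels below entry.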
## Testing Strategy
### Validation Approach
The testing strategy follows a two-phase approach: first, surface counterexamples that demonstrate the bug on unfixed code, then verify the fix works correctly and preserves existing behavior.
### Exploratory Fault Condition Checking
**Goal**: Surface counterexamples that demonstrate the bug BEFORE implementing the fix. Confirm or refute the root cause analysis. If we refute, we will need to re-hypothesize.
**Test Plan**: Create mock scenarios with multiple S/R levels of varying strength and distance. Run `scan_ticker` on unfixed code and assert that the selected target is NOT the most distant level. These tests will fail on unfixed code, confirming the bug.
**Test Cases**:
1. **Long with strong-near vs weak-far**: Entry=100, risk=3. Near level (103, strength=80) vs far level (115, strength=10). Assert selected target != 115 (will fail on unfixed code)
2. **Short with strong-near vs weak-far**: Entry=200, risk=5. Near level (192, strength=70) vs far level (170, strength=15). Assert selected target != 170 (will fail on unfixed code)
3. **Three candidates with varying profiles**: Entry=50, risk=2. Three levels at different distances/strengths. Assert selection is not purely distance-based (will fail on unfixed code)
**Expected Counterexamples**:
- The unfixed code always selects the most distant level regardless of strength
- Root cause confirmed: selection loop only tracks `best_rr` which is proportional to distance
### Fix Checking
**Goal**: Verify that for all inputs where the bug condition holds, the fixed function produces the expected behavior.
**Pseudocode:**
```
FOR ALL input WHERE isBugCondition(input) DO
result := scan_ticker_fixed(input)
selected_level := result.target
ASSERT selected_level == argmax(candidates, key=quality_score)
ASSERT quality_score(selected_level) >= quality_score(any_other_candidate)
END FOR
```
### Preservation Checking
**Goal**: Verify that for all inputs where the bug condition does NOT hold, the fixed function produces the same result as the original function.
**Pseudocode:**
```
FOR ALL input WHERE NOT isBugCondition(input) DO
ASSERT scan_ticker_original(input) == scan_ticker_fixed(input)
END FOR
```
**Testing Approach**: Property-based testing is recommended for preservation checking because:
- It generates many test cases automatically across the input domain
- It catches edge cases that manual unit tests might miss
- It provides strong guarantees that behavior is unchanged for all non-buggy inputs
**Test Plan**: Observe behavior on UNFIXED code first for zero-candidate and single-candidate scenarios, then write property-based tests capturing that behavior.
**Test Cases**:
1. **Zero candidates preservation**: Generate random tickers with no S/R levels in target direction. Verify no setup is produced (same as original).
2. **Single candidate preservation**: Generate random tickers with exactly one qualifying S/R level. Verify same setup is produced as original.
3. **Below-threshold preservation**: Generate random tickers where all candidates have R:R below threshold. Verify no setup is produced.
4. **Database persistence preservation**: Verify old setups are deleted and new ones inserted identically.
### Unit Tests
- Test `_compute_quality_score` with known inputs and verify output matches expected formula
- Test that quality score components are properly normalized to the 0–1 range
- Test that `rr_cap` correctly caps the R:R normalization
- Test edge cases: strength=0, strength=100, distance=0, single candidate
### Property-Based Tests
- Generate random sets of S/R levels with varying strengths and distances; verify the selected target always has the highest quality score among candidates
- Generate random single-candidate scenarios; verify output matches what the original function would produce
- Generate random inputs with all candidates below R:R threshold; verify no setup is produced
### Integration Tests
- Test full `scan_ticker` flow with mocked DB containing multiple S/R levels of varying quality
- Test `scan_all_tickers` still processes each ticker independently
- Test that `get_trade_setups` returns correct sorting after fix

# Tasks
## 1. Add quality score helper function
- [x] 1.1 Create `_compute_quality_score(rr, strength, distance, entry_price, *, w_rr=0.35, w_strength=0.35, w_proximity=0.30, rr_cap=10.0) -> float` function in `app/services/rr_scanner_service.py` that computes a weighted sum of normalized R:R, normalized strength, and normalized proximity
- [x] 1.2 Implement normalization: `norm_rr = min(rr / rr_cap, 1.0)`, `norm_strength = strength / 100.0`, `norm_proximity = 1.0 - min(distance / entry_price, 1.0)`
- [x] 1.3 Return `w_rr * norm_rr + w_strength * norm_strength + w_proximity * norm_proximity`
## 2. Replace long setup selection logic
- [x] 2.1 In `scan_ticker`, replace the long setup loop that tracks `best_rr` / `best_target` with a loop that computes `quality_score` for each candidate via `_compute_quality_score` and tracks `best_quality` / `best_candidate_rr` / `best_candidate_target`
- [x] 2.2 Keep the `rr >= rr_threshold` filter — only candidates meeting the threshold are scored
- [x] 2.3 Store the selected candidate's actual R:R ratio (not the quality score) in `TradeSetup.rr_ratio`
## 3. Replace short setup selection logic
- [x] 3.1 Apply the same quality-score selection change to the short setup loop, mirroring the long setup changes
- [x] 3.2 Ensure distance is computed as `entry_price - lv.price_level` for short candidates
## 4. Write unit tests for `_compute_quality_score`
- [x] 4.1 Create `tests/unit/test_rr_scanner_quality_score.py` with tests for known inputs verifying the formula output
- [x] 4.2 Test edge cases: strength=0, strength=100, distance=0, rr at cap, rr above cap
- [x] 4.3 Test that all normalized components stay in the 0–1 range
## 5. Write exploratory bug-condition tests (run on unfixed code to confirm bug)
- [x] 5.1 [PBT-exploration] Create `tests/unit/test_rr_scanner_bug_exploration.py` with a property test that generates multiple S/R levels with varying strengths and distances, calls `scan_ticker`, and asserts the selected target is NOT always the most distant level — expected to FAIL on unfixed code, confirming the bug
## 6. Write fix-checking tests
- [x] 6.1 [PBT-fix] Create `tests/unit/test_rr_scanner_fix_check.py` with a property test that generates multiple candidate S/R levels meeting the R:R threshold, calls `scan_ticker` on fixed code, and asserts the selected target has the highest quality score among all candidates
## 7. Write preservation tests
- [x] 7.1 [PBT-preservation] Create `tests/unit/test_rr_scanner_preservation.py` with a property test that generates zero-candidate and single-candidate scenarios and asserts the fixed function produces the same output as the original (no setup for zero candidates, same setup for single candidate)
- [x] 7.2 Add unit test verifying that when no S/R levels exist, no setup is produced (unchanged)
- [x] 7.3 Add unit test verifying that when only one candidate meets threshold, it is selected (unchanged)
- [x] 7.4 Add unit test verifying `get_trade_setups` sorting is unchanged (R:R desc, composite desc)
## 8. Integration test
- [x] 8.1 Add integration test in `tests/unit/test_rr_scanner_integration.py` that mocks DB with multiple S/R levels of varying quality, runs `scan_ticker`, and verifies the full flow: quality-based selection, correct TradeSetup fields, database persistence


@@ -0,0 +1 @@
{"specId": "9b39d94f-51e1-42d3-bacc-68eb3961f2b1", "workflowType": "requirements-first", "specType": "feature"}


@@ -0,0 +1,351 @@
# Design Document: Score Transparency & Trade Overlay
## Overview
This feature extends the stock signal platform in two areas:
1. **Score Transparency** — The scoring API and UI are enhanced to expose the full breakdown of how each dimension score and the composite score are calculated. Each dimension returns its sub-scores, raw input values, weights, and formula descriptions. The frontend renders expandable panels showing this detail.
2. **Trade Setup Chart Overlay** — When a trade setup exists for a ticker (from the R:R scanner), the candlestick chart renders colored zones for entry, stop-loss, and take-profit levels. The ticker detail page fetches trade data and passes it to the chart.
Both features are additive — they extend existing API responses and UI components without breaking current behavior.
## Architecture
The changes follow the existing layered architecture:
```
┌─────────────────────────────────────────────────────┐
│ Frontend (React) │
│ ┌──────────────┐ ┌──────────────┐ ┌───────────┐ │
│ │ ScoreCard │ │ Dimension │ │Candlestick│ │
│ │ (composite │ │ Panel │ │Chart │ │
│ │ weights) │ │ (breakdowns) │ │(trade │ │
│ └──────┬───────┘ └──────┬───────┘ │ overlay) │ │
│ │ │ └─────┬─────┘ │
│ ┌──────┴─────────────────┴────────────────┴─────┐ │
│ │ useTickerDetail + useTrades │ │
│ └──────────────────┬────────────────────────────┘ │
└─────────────────────┼───────────────────────────────┘
│ HTTP
┌─────────────────────┼───────────────────────────────┐
│ Backend (FastAPI) │
│ ┌──────────────────┴────────────────────────────┐ │
│ │ scores router │ │
│ │ GET /api/v1/scores/{symbol} │ │
│ │ (extended response with breakdowns) │ │
│ └──────────────────┬────────────────────────────┘ │
│ ┌──────────────────┴────────────────────────────┐ │
│ │ scoring_service.py │ │
│ │ _compute_*_score → returns ScoreBreakdown │ │
│ │ get_score → assembles full breakdown response │ │
│ └───────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
```
Key design decisions:
- **Backend-driven breakdowns**: Each `_compute_*_score` function is refactored to return a `ScoreBreakdown` dict alongside the numeric score, rather than computing breakdowns separately. This ensures the breakdown always matches the actual score.
- **Single API call**: The existing `GET /api/v1/scores/{symbol}` endpoint is extended (not a new endpoint) to include breakdowns in the response. This avoids extra round-trips.
- **Trade overlay via props**: The `CandlestickChart` component receives an optional `tradeSetup` prop. The chart draws overlay elements using the existing canvas rendering pipeline — no new library needed.
- **Trade data reuse**: The frontend reuses the existing `useTrades` hook and trades API. The `TickerDetailPage` filters for the current symbol client-side.
## Components and Interfaces
### Backend
#### Modified: `scoring_service.py` — Dimension compute functions
Each `_compute_*_score` function changes from returning `float | None` to returning a tuple `(float | None, ScoreBreakdown | None)` where `ScoreBreakdown` is a typed dict:
```python
from typing import TypedDict

class SubScoreDetail(TypedDict):
    name: str
    score: float
    weight: float
    raw_value: float | str | None
    description: str

class ScoreBreakdown(TypedDict):
    sub_scores: list[SubScoreDetail]
    formula: str
    unavailable: list[dict[str, str]]  # [{"name": ..., "reason": ...}]
```
- `_compute_technical_score` → returns sub-scores for ADX (0.4), EMA (0.3), RSI (0.3) with raw indicator values
- `_compute_sentiment_score` → returns record count, decay rate, lookback window, weighted average formula
- `_compute_fundamental_score` → returns PE Ratio, Revenue Growth, Earnings Surprise sub-scores with raw values
- `_compute_momentum_score` → returns 5-day ROC (0.5), 20-day ROC (0.5) with raw percentages
- `_compute_sr_quality_score` → returns strong count (max 40), proximity (max 30), avg strength (max 30) with inputs
#### Modified: `scoring_service.py` — `get_score`
Assembles the full response including breakdowns per dimension and composite weight info (available vs missing dimensions, re-normalized weights).
#### Modified: `app/schemas/score.py`
New Pydantic models:
```python
from datetime import datetime

from pydantic import BaseModel

class SubScoreResponse(BaseModel):
    name: str
    score: float
    weight: float
    raw_value: float | str | None = None
    description: str = ""

class ScoreBreakdownResponse(BaseModel):
    sub_scores: list[SubScoreResponse]
    formula: str
    unavailable: list[dict[str, str]] = []

class DimensionScoreResponse(BaseModel):  # extended
    dimension: str
    score: float
    is_stale: bool
    computed_at: datetime | None = None
    breakdown: ScoreBreakdownResponse | None = None  # NEW

class CompositeBreakdownResponse(BaseModel):
    weights: dict[str, float]
    available_dimensions: list[str]
    missing_dimensions: list[str]
    renormalized_weights: dict[str, float]
    formula: str

class ScoreResponse(BaseModel):  # extended
    # ... existing fields ...
    composite_breakdown: CompositeBreakdownResponse | None = None  # NEW
```
#### Modified: `app/routers/scores.py`
The `read_score` endpoint populates the new breakdown fields from the service response.
### Frontend
#### Modified: `frontend/src/lib/types.ts`
New TypeScript types:
```typescript
interface SubScore {
  name: string;
  score: number;
  weight: number;
  raw_value: number | string | null;
  description: string;
}

interface ScoreBreakdown {
  sub_scores: SubScore[];
  formula: string;
  unavailable: { name: string; reason: string }[];
}

interface CompositeBreakdown {
  weights: Record<string, number>;
  available_dimensions: string[];
  missing_dimensions: string[];
  renormalized_weights: Record<string, number>;
  formula: string;
}
```
Extended existing types:
- `DimensionScoreDetail` gains `breakdown?: ScoreBreakdown`
- `ScoreResponse` gains `composite_breakdown?: CompositeBreakdown`
#### New: `frontend/src/components/ticker/DimensionBreakdownPanel.tsx`
An expandable panel component that renders inside the ScoreCard for each dimension. Shows:
- Chevron toggle for expand/collapse
- Sub-score rows: name, bar visualization, score value, weight badge, raw input value
- Formula description text
- Muted "unavailable" labels for missing sub-scores
#### Modified: `frontend/src/components/ui/ScoreCard.tsx`
- Each dimension row becomes clickable/expandable, rendering `DimensionBreakdownPanel` when expanded
- Composite score section shows dimension weights next to each bar
- Missing dimensions shown with muted styling and "redistributed" indicator
- Tooltip/inline text explaining weighted average with re-normalization
#### Modified: `frontend/src/components/charts/CandlestickChart.tsx`
New optional prop: `tradeSetup?: TradeSetup`
When provided, the chart draws:
- Entry price: dashed horizontal line (blue/white) spanning full width
- Stop-loss zone: red semi-transparent rectangle between entry and stop-loss
- Take-profit zone: green semi-transparent rectangle between entry and target
- Price labels on y-axis for entry, stop, target
- All three price levels included in y-axis range calculation
- Hover tooltip showing direction, entry, stop, target, R:R ratio
#### Modified: `frontend/src/pages/TickerDetailPage.tsx`
- Calls `useTrades()` to fetch all trade setups
- Filters for current symbol, picks latest by `detected_at`
- Passes `tradeSetup` prop to `CandlestickChart`
- Renders a trade setup summary card below the chart when a setup exists
- Handles trades API failure gracefully (chart renders without overlay, error logged)
#### Modified: `frontend/src/hooks/useTickerDetail.ts`
Adds trades query to the hook return value so the page has access to trade data.
## Data Models
### Backend Schema Changes
No new database tables. The breakdown data is computed on-the-fly from existing data and returned in the API response only.
### API Response Shape (extended `GET /api/v1/scores/{symbol}`)
```json
{
  "status": "success",
  "data": {
    "symbol": "AAPL",
    "composite_score": 72.5,
    "composite_stale": false,
    "weights": { "technical": 0.25, "sr_quality": 0.20, "sentiment": 0.15, "fundamental": 0.20, "momentum": 0.20 },
    "composite_breakdown": {
      "weights": { "technical": 0.25, "sr_quality": 0.20, "sentiment": 0.15, "fundamental": 0.20, "momentum": 0.20 },
      "available_dimensions": ["technical", "sr_quality", "fundamental", "momentum"],
      "missing_dimensions": ["sentiment"],
      "renormalized_weights": { "technical": 0.294, "sr_quality": 0.235, "fundamental": 0.235, "momentum": 0.235 },
      "formula": "Weighted average of available dimensions with re-normalized weights: sum(weight_i * score_i) / sum(weight_i)"
    },
    "dimensions": [
      {
        "dimension": "technical",
        "score": 68.2,
        "is_stale": false,
        "computed_at": "2024-01-15T10:30:00Z",
        "breakdown": {
          "sub_scores": [
            { "name": "ADX", "score": 72.0, "weight": 0.4, "raw_value": 72.0, "description": "ADX value (0-100). Higher = stronger trend." },
            { "name": "EMA", "score": 65.0, "weight": 0.3, "raw_value": 1.5, "description": "Price 1.5% above EMA(20). Score: 50 + pct_diff * 10." },
            { "name": "RSI", "score": 62.0, "weight": 0.3, "raw_value": 62.0, "description": "RSI(14) value. Score equals RSI." }
          ],
          "formula": "Weighted average: 0.4*ADX + 0.3*EMA + 0.3*RSI, re-normalized if any sub-score unavailable.",
          "unavailable": []
        }
      }
    ],
    "missing_dimensions": ["sentiment"],
    "computed_at": "2024-01-15T10:30:00Z"
  }
}
```
### Trade Setup Data (existing, no changes)
The `TradeSetup` type already exists in `frontend/src/lib/types.ts` with all needed fields: `symbol`, `direction`, `entry_price`, `stop_loss`, `target`, `rr_ratio`, `detected_at`.
## Correctness Properties
*A property is a characteristic or behavior that should hold true across all valid executions of a system — essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*
### Property 1: Dimension breakdown contains correct sub-scores
*For any* dimension type (technical, sentiment, fundamental, momentum, sr_quality) and any valid input data sufficient to compute that dimension, the returned `ScoreBreakdown` shall contain exactly the expected sub-score names with the correct weights for that dimension type, and each sub-score's `raw_value` shall be non-null.
Specifically:
- technical → ADX (0.4), EMA (0.3), RSI (0.3)
- sentiment → record_count, decay_rate, lookback_window as sub-score metadata
- fundamental → PE Ratio, Revenue Growth, Earnings Surprise (equal weight)
- momentum → 5-day ROC (0.5), 20-day ROC (0.5)
- sr_quality → Strong Count (max 40), Proximity (max 30), Avg Strength (max 30)
**Validates: Requirements 1.1, 1.2, 1.3, 1.4, 1.5, 1.6**
### Property 2: Composite re-normalization correctness
*For any* set of dimension scores where at least one dimension is available and zero or more are missing, the composite breakdown shall:
- List exactly the available dimensions in `available_dimensions`
- List exactly the missing dimensions in `missing_dimensions`
- Have `renormalized_weights` that sum to 1.0 (within floating-point tolerance)
- Have each renormalized weight equal to `original_weight / sum(available_original_weights)`
**Validates: Requirements 1.7, 3.2**
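A minimal sketch of the re-normalization rule, using the weights from the API example above (sentiment missing, so each remaining weight is divided by the 0.85 of available weight):

```python
def renormalize(weights: dict[str, float], available: set[str]) -> dict[str, float]:
    """Re-normalize the original weights over the available dimensions only."""
    total = sum(w for d, w in weights.items() if d in available)
    return {d: w / total for d, w in weights.items() if d in available}

weights = {"technical": 0.25, "sr_quality": 0.20, "sentiment": 0.15,
           "fundamental": 0.20, "momentum": 0.20}
renorm = renormalize(weights, available={"technical", "sr_quality",
                                         "fundamental", "momentum"})
# technical: 0.25 / 0.85, and the renormalized weights sum to 1.0
```

The property test simply asserts the two bullet conditions: the sum is 1.0 within tolerance and each entry equals `original / sum(available)`.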
### Property 3: Dimension breakdown UI rendering completeness
*For any* `ScoreBreakdown` object with N sub-scores, the `DimensionBreakdownPanel` component shall render exactly N sub-score rows, each containing the sub-score name, numeric score value, weight, and raw input value.
**Validates: Requirements 2.1**
### Property 4: Composite weight display
*For any* score response with K dimensions (some available, some missing), the `ScoreCard` component shall render the weight value next to each dimension bar, and missing dimensions shall be rendered with a visually distinct (muted/dimmed) style.
**Validates: Requirements 3.1, 3.2**
### Property 5: Trade overlay y-axis range includes all trade levels
*For any* OHLCV dataset and any `TradeSetup` (with entry_price, stop_loss, target), the chart's computed y-axis range `[lo, hi]` shall satisfy: `lo <= min(entry_price, stop_loss, target)` and `hi >= max(entry_price, stop_loss, target)`.
**Validates: Requirements 4.4**
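One way to satisfy this property is to fold the trade levels into the min/max scan before any padding is applied; a sketch, where the `pad_pct` padding is an assumed detail of the chart's range calculation:

```python
def y_axis_range(lows: list[float], highs: list[float],
                 trade_levels: list[float], pad_pct: float = 0.02) -> tuple[float, float]:
    """Compute a [lo, hi] y-range covering all candles and all trade levels."""
    lo = min(min(lows), min(trade_levels))
    hi = max(max(highs), max(trade_levels))
    pad = (hi - lo) * pad_pct or 1.0  # avoid a zero-height range
    return lo - pad, hi + pad
```

Since entry, stop, and target are included in the scan, the property `lo <= min(levels)` and `hi >= max(levels)` holds by construction.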
### Property 6: Trade setup selection picks latest matching symbol
*For any* non-empty list of `TradeSetup` objects and any symbol string, filtering for that symbol and selecting by latest `detected_at` shall return the setup with the maximum `detected_at` among all setups matching that symbol. If no setups match, the result shall be null/undefined.
**Validates: Requirements 5.1, 5.5**
## Error Handling
| Scenario | Behavior |
|---|---|
| Dimension computation fails (insufficient data) | Score returns `None`, breakdown includes unavailable sub-scores with reason strings (Req 1.8) |
| Individual sub-score fails (e.g., ADX needs 28 bars but only 20 available) | Sub-score omitted from breakdown, added to `unavailable` list with reason. Remaining sub-scores re-normalized (Req 1.8) |
| Trades API request fails on TickerDetailPage | Chart renders without trade overlay. Error logged to console. Page remains functional (Req 5.4) |
| No trade setup exists for current symbol | Chart renders normally without any overlay elements (Req 4.6) |
| Score breakdown data is null (stale or never computed) | Dimension row shows the score without an expandable breakdown section |
| Composite has zero available dimensions | `composite_score` is `null`, `composite_breakdown` shows all dimensions as missing |
## Testing Strategy
### Unit Tests
Unit tests cover specific examples and edge cases:
- **Backend**: Test each `_compute_*_score` function returns correct breakdown structure for known input data. Test edge cases: missing sub-scores, all sub-scores missing, single sub-score available.
- **Frontend components**: Test `DimensionBreakdownPanel` renders correctly for each dimension type with known breakdown data. Test expand/collapse behavior. Test unavailable sub-score rendering.
- **Trade overlay**: Test `CandlestickChart` draws overlay elements when `tradeSetup` prop is provided. Test no overlay when prop is absent. Test tooltip content on hover.
- **Trade setup selection**: Test filtering and latest-selection logic with specific examples including edge cases (no matches, single match, multiple matches with same timestamp).
- **Composite display**: Test `ScoreCard` renders weights, missing dimension indicators, and re-normalization explanation.
### Property-Based Tests
Property-based tests use `hypothesis` (Python backend) and `fast-check` (TypeScript frontend) with minimum 100 iterations per property.
Each property test references its design document property:
- **Property 1** — `Feature: score-transparency-trade-overlay, Property 1: Dimension breakdown contains correct sub-scores`
Generate random valid indicator data for each dimension type, compute the score, and verify the breakdown structure matches the expected sub-score names and weights.
- **Property 2** — `Feature: score-transparency-trade-overlay, Property 2: Composite re-normalization correctness`
Generate random subsets of 5 dimensions (1-5 available), assign random weights, compute re-normalized weights, and verify they sum to 1.0 and each equals `original / sum(available)`.
- **Property 3** — `Feature: score-transparency-trade-overlay, Property 3: Dimension breakdown UI rendering completeness`
Generate random `ScoreBreakdown` objects with 1-5 sub-scores, render `DimensionBreakdownPanel`, and verify the DOM contains exactly N sub-score rows with all required fields.
- **Property 4** — `Feature: score-transparency-trade-overlay, Property 4: Composite weight display`
Generate random score responses with random available/missing dimension combinations, render `ScoreCard`, and verify weight labels are present and missing dimensions are visually distinct.
- **Property 5** — `Feature: score-transparency-trade-overlay, Property 5: Trade overlay y-axis range includes all trade levels`
Generate random OHLCV data and random trade setups, compute the chart y-axis range, and verify all three trade levels fall within `[lo, hi]`.
- **Property 6** — `Feature: score-transparency-trade-overlay, Property 6: Trade setup selection picks latest matching symbol`
Generate random lists of trade setups with random symbols and timestamps, apply the selection logic, and verify the result is the latest setup for the target symbol.
### Test Configuration
- Python property tests: `hypothesis` library, `@settings(max_examples=100)`
- TypeScript property tests: `fast-check` library, `fc.assert(property, { numRuns: 100 })`
- Each property test tagged with a comment: `Feature: score-transparency-trade-overlay, Property N: <title>`
- Each correctness property implemented by a single property-based test


@@ -0,0 +1,87 @@
# Requirements Document
## Introduction
This feature adds two capabilities to the stock trading signal platform:
1. **Score Transparency** — Each dimension score (sentiment, fundamental, momentum, technical, sr_quality) and the composite score currently appear as opaque numbers. This feature exposes the scoring formulas, sub-scores, weights, and input values so users can understand exactly how each score was calculated.
2. **Trade Setup Chart Overlay** — When a trade setup exists for a ticker (from the R:R scanner), the candlestick chart on the ticker detail page renders visual overlays showing the entry price, stop-loss zone, and take-profit zone as colored regions, similar to TradingView trade visualization.
## Glossary
- **Scoring_API**: The backend API endpoint (`GET /api/v1/scores/{symbol}`) that returns composite and dimension scores for a ticker
- **Score_Breakdown**: A structured object containing the sub-scores, input values, weights, and formula description for a single dimension score
- **Dimension_Panel**: A frontend UI component that displays a single dimension score along with its Score_Breakdown details
- **ScoreCard_Component**: The frontend component (`ScoreCard.tsx`) that displays the composite score ring and dimension bar list
- **CandlestickChart_Component**: The frontend canvas-based chart component (`CandlestickChart.tsx`) that renders OHLCV data with overlays
- **Trade_Overlay**: A set of visual elements drawn on the CandlestickChart_Component representing a trade setup's entry, stop-loss, and target levels
- **TradeSetup**: A data object with fields: symbol, direction, entry_price, stop_loss, target, rr_ratio, representing a detected trade opportunity
- **TickerDetail_Page**: The page (`TickerDetailPage.tsx`) that displays all data for a single ticker
## Requirements
### Requirement 1: Score Breakdown API Response
**User Story:** As a trader, I want the scores API to return detailed breakdowns for each dimension score, so that the frontend can display how scores were calculated.
#### Acceptance Criteria
1. WHEN a score request is made for a symbol, THE Scoring_API SHALL return a Score_Breakdown object for each dimension containing: sub-score names, sub-score values, input values used, weights applied, and the formula description
2. WHEN the technical dimension is computed, THE Scoring_API SHALL include sub-scores for ADX (weight 0.4), EMA (weight 0.3), and RSI (weight 0.3), along with the raw indicator values (adx_value, ema_value, latest_close, rsi_value)
3. WHEN the sentiment dimension is computed, THE Scoring_API SHALL include the number of sentiment records used, the decay rate, the lookback window, and the time-decay weighted average formula parameters
4. WHEN the fundamental dimension is computed, THE Scoring_API SHALL include sub-scores for PE Ratio, Revenue Growth, and Earnings Surprise, along with the raw metric values and the normalization formula for each
5. WHEN the momentum dimension is computed, THE Scoring_API SHALL include sub-scores for 5-day ROC (weight 0.5) and 20-day ROC (weight 0.5), along with the raw ROC percentage values
6. WHEN the sr_quality dimension is computed, THE Scoring_API SHALL include sub-scores for strong level count (max 40 pts), proximity (max 30 pts), and average strength (max 30 pts), along with the input values (strong_count, nearest_distance_pct, avg_strength)
7. WHEN the composite score is computed, THE Scoring_API SHALL include the per-dimension weights used and indicate which dimensions were available versus missing for the re-normalization calculation
8. IF a sub-score component has insufficient data, THEN THE Scoring_API SHALL omit that sub-score from the breakdown and include a reason string explaining the data gap
### Requirement 2: Score Transparency UI — Dimension Panels
**User Story:** As a trader, I want to see a detailed breakdown of each dimension score in the UI, so that I can understand what drives each score.
#### Acceptance Criteria
1. WHEN a dimension score is displayed in the ScoreCard_Component, THE Dimension_Panel SHALL show an expandable breakdown section listing each sub-score name, its value, its weight, and the raw input value
2. WHEN the user expands a dimension breakdown, THE Dimension_Panel SHALL display the formula description as human-readable text explaining how the sub-scores combine into the dimension score
3. WHEN the technical dimension breakdown is expanded, THE Dimension_Panel SHALL display ADX score and raw ADX value, EMA score and percentage difference from EMA, and RSI score and raw RSI value, each with their respective weights (40%, 30%, 30%)
4. WHEN the sentiment dimension breakdown is expanded, THE Dimension_Panel SHALL display the number of sentiment records, the decay rate, and the weighted average calculation summary
5. WHEN the fundamental dimension breakdown is expanded, THE Dimension_Panel SHALL display PE Ratio sub-score with raw PE value, Revenue Growth sub-score with raw growth percentage, and Earnings Surprise sub-score with raw surprise percentage
6. WHEN the momentum dimension breakdown is expanded, THE Dimension_Panel SHALL display 5-day ROC sub-score with raw ROC percentage and 20-day ROC sub-score with raw ROC percentage
7. WHEN the sr_quality dimension breakdown is expanded, THE Dimension_Panel SHALL display strong level count score, proximity score, and average strength score with their respective input values
8. IF a sub-score is unavailable due to insufficient data, THEN THE Dimension_Panel SHALL display a muted label indicating the sub-score is unavailable with the reason
### Requirement 3: Composite Score Transparency
**User Story:** As a trader, I want to see how the composite score is calculated from dimension scores, so that I can understand the overall signal strength.
#### Acceptance Criteria
1. WHEN the composite score is displayed, THE ScoreCard_Component SHALL show the weight assigned to each dimension next to its bar in the dimensions list
2. WHEN a dimension is missing from the composite calculation, THE ScoreCard_Component SHALL visually indicate the missing dimension and show that its weight was redistributed
3. WHEN the user views the composite score section, THE ScoreCard_Component SHALL display a tooltip or inline text explaining that the composite is a weighted average of available dimensions with re-normalized weights
### Requirement 4: Trade Setup Chart Overlay
**User Story:** As a trader, I want to see my trade setup (entry, stop-loss, target) overlaid on the candlestick chart, so that I can visually assess the trade relative to price action.
#### Acceptance Criteria
1. WHEN a TradeSetup exists for the current ticker, THE CandlestickChart_Component SHALL render an entry price line as a dashed horizontal line in blue or white color spanning the full chart width
2. WHEN a TradeSetup exists for the current ticker, THE CandlestickChart_Component SHALL render a stop-loss zone as a red semi-transparent shaded rectangle between the entry price and the stop-loss price level
3. WHEN a TradeSetup exists for the current ticker, THE CandlestickChart_Component SHALL render a take-profit zone as a green semi-transparent shaded rectangle between the entry price and the target price level
4. THE CandlestickChart_Component SHALL include the entry price, stop-loss price, and target price in the y-axis price range calculation so that all trade overlay levels are visible within the chart viewport
5. WHEN the user hovers over the trade overlay area, THE CandlestickChart_Component SHALL display a tooltip showing the trade direction, entry price, stop-loss price, target price, and R:R ratio
6. IF no TradeSetup exists for the current ticker, THEN THE CandlestickChart_Component SHALL render the chart without any trade overlay elements
### Requirement 5: Trade Setup Data Integration on Ticker Detail Page
**User Story:** As a trader, I want the ticker detail page to automatically fetch and display trade setups for the current ticker, so that I see the trade overlay without extra navigation.
#### Acceptance Criteria
1. WHEN the TickerDetail_Page loads for a symbol, THE TickerDetail_Page SHALL fetch trade setups from the trades API and filter for setups matching the current symbol
2. WHEN a matching TradeSetup is found, THE TickerDetail_Page SHALL pass the trade setup data to the CandlestickChart_Component as a prop
3. WHEN a matching TradeSetup is found, THE TickerDetail_Page SHALL display a trade setup summary card below the chart showing direction, entry price, stop-loss, target, and R:R ratio
4. IF the trades API request fails, THEN THE TickerDetail_Page SHALL render the chart without trade overlays and log the error without disrupting the page
5. IF multiple TradeSetups exist for the same symbol, THEN THE TickerDetail_Page SHALL use the most recently detected setup (latest detected_at)


@@ -0,0 +1,142 @@
# Implementation Plan: Score Transparency & Trade Overlay
## Overview
Extend the scoring API and UI to expose full score breakdowns (sub-scores, weights, raw values, formulas) for each dimension and the composite score. Add trade setup chart overlays (entry, stop-loss, take-profit zones) to the candlestick chart on the ticker detail page. Backend changes are in Python/FastAPI, frontend in React/TypeScript.
## Tasks
- [x] 1. Add score breakdown schemas and refactor scoring service
- [x] 1.1 Add breakdown Pydantic models to `app/schemas/score.py`
- Add `SubScoreResponse`, `ScoreBreakdownResponse`, `CompositeBreakdownResponse` models
- Extend `DimensionScoreResponse` with optional `breakdown: ScoreBreakdownResponse` field
- Extend `ScoreResponse` with optional `composite_breakdown: CompositeBreakdownResponse` field
- _Requirements: 1.1, 1.7_
- [x] 1.2 Refactor `_compute_technical_score` in `app/services/scoring_service.py` to return breakdown
- Change return type to `tuple[float | None, ScoreBreakdown | None]`
- Return sub-scores for ADX (weight 0.4), EMA (weight 0.3), RSI (weight 0.3) with raw indicator values
- Include formula description string
- Add unavailable sub-scores with reason when data is insufficient
- _Requirements: 1.2, 1.8_
- [x] 1.3 Refactor `_compute_sentiment_score` to return breakdown
- Return record count, decay rate, lookback window, and weighted average formula parameters as sub-score metadata
- _Requirements: 1.3, 1.8_
- [x] 1.4 Refactor `_compute_fundamental_score` to return breakdown
- Return PE Ratio, Revenue Growth, Earnings Surprise sub-scores with raw metric values and normalization formula
- _Requirements: 1.4, 1.8_
- [x] 1.5 Refactor `_compute_momentum_score` to return breakdown
- Return 5-day ROC (weight 0.5) and 20-day ROC (weight 0.5) sub-scores with raw ROC percentages
- _Requirements: 1.5, 1.8_
- [x] 1.6 Refactor `_compute_sr_quality_score` to return breakdown
- Return strong count (max 40 pts), proximity (max 30 pts), avg strength (max 30 pts) sub-scores with input values
- _Requirements: 1.6, 1.8_
- [x] 1.7 Update `get_score` to assemble composite breakdown and pass dimension breakdowns through
- Build `CompositeBreakdownResponse` with original weights, available/missing dimensions, re-normalized weights, and formula
- Wire dimension breakdowns into the response dict
- _Requirements: 1.7, 3.2_
- [x] 1.8 Update `read_score` in `app/routers/scores.py` to populate breakdown fields from service response
- Map breakdown dicts from service into the new Pydantic response models
- _Requirements: 1.1_
- [ ]* 1.9 Write property test: Dimension breakdown contains correct sub-scores (Property 1)
- **Property 1: Dimension breakdown contains correct sub-scores**
- Use `hypothesis` to generate valid input data for each dimension type
- Verify returned breakdown has expected sub-score names, correct weights, and non-null raw values
- **Validates: Requirements 1.1, 1.2, 1.3, 1.4, 1.5, 1.6**
- [ ]* 1.10 Write property test: Composite re-normalization correctness (Property 2)
- **Property 2: Composite re-normalization correctness**
- Use `hypothesis` to generate random subsets of dimensions with random weights
- Verify re-normalized weights sum to 1.0 and each equals `original_weight / sum(available_weights)`
- **Validates: Requirements 1.7, 3.2**
- [x] 2. Checkpoint — Backend score breakdown
- Ensure all tests pass, ask the user if questions arise.
- [x] 3. Add frontend types and DimensionBreakdownPanel component
- [x] 3.1 Extend frontend types in `frontend/src/lib/types.ts`
- Add `SubScore`, `ScoreBreakdown`, `CompositeBreakdown` interfaces
- Extend `DimensionScoreDetail` with optional `breakdown` field
- Extend `ScoreResponse` with optional `composite_breakdown` field
- _Requirements: 1.1, 1.7_
- [x] 3.2 Create `frontend/src/components/ticker/DimensionBreakdownPanel.tsx`
- Expandable panel showing sub-score rows: name, score value, weight badge, raw input value
- Formula description text section
- Muted "unavailable" labels for missing sub-scores with reason
- _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8_
- [x] 3.3 Modify `frontend/src/components/ui/ScoreCard.tsx` for composite transparency
- Make each dimension row expandable, rendering `DimensionBreakdownPanel` when expanded
- Show dimension weight next to each bar
- Show missing dimensions with muted styling and "redistributed" indicator
- Add tooltip/inline text explaining weighted average with re-normalization
- _Requirements: 3.1, 3.2, 3.3_
- [ ]* 3.4 Write property test: Dimension breakdown UI rendering completeness (Property 3)
- **Property 3: Dimension breakdown UI rendering completeness**
- Use `fast-check` to generate random `ScoreBreakdown` objects with 1-5 sub-scores
- Render `DimensionBreakdownPanel` and verify DOM contains exactly N sub-score rows with all required fields
- **Validates: Requirements 2.1**
- [ ]* 3.5 Write property test: Composite weight display (Property 4)
- **Property 4: Composite weight display**
- Use `fast-check` to generate random score responses with random available/missing dimension combinations
- Render `ScoreCard` and verify weight labels present and missing dimensions visually distinct
- **Validates: Requirements 3.1, 3.2**
- [x] 4. Checkpoint — Score transparency UI
- Ensure all tests pass; ask the user if questions arise.
- [x] 5. Add trade setup chart overlay
- [x] 5.1 Modify `frontend/src/components/charts/CandlestickChart.tsx` to accept and render trade overlay
- Add optional `tradeSetup?: TradeSetup` prop
- Draw entry price as dashed horizontal line (blue/white) spanning full chart width
- Draw stop-loss zone as red semi-transparent rectangle between entry and stop-loss
- Draw take-profit zone as green semi-transparent rectangle between entry and target
- Include entry, stop-loss, target in y-axis price range calculation
- Add hover tooltip showing direction, entry, stop, target, R:R ratio
- Render no overlay when prop is absent
- _Requirements: 4.1, 4.2, 4.3, 4.4, 4.5, 4.6_
- [ ]* 5.2 Write property test: Trade overlay y-axis range includes all trade levels (Property 5)
- **Property 5: Trade overlay y-axis range includes all trade levels**
- Use `fast-check` to generate random OHLCV data and random trade setups
- Extract the y-axis range computation logic and verify all three trade levels fall within `[lo, hi]`
- **Validates: Requirements 4.4**
- [x] 6. Integrate trade setup data on ticker detail page
- [x] 6.1 Update `frontend/src/hooks/useTickerDetail.ts` to include trades data
- Add trades query to the hook return value
- _Requirements: 5.1_
- [x] 6.2 Modify `frontend/src/pages/TickerDetailPage.tsx` to wire trade overlay
- Fetch trade setups via `useTrades()`, filter for current symbol, pick latest by `detected_at`
- Pass `tradeSetup` prop to `CandlestickChart`
- Render trade setup summary card below chart (direction, entry, stop, target, R:R)
- Handle trades API failure gracefully — chart renders without overlay, error logged
- _Requirements: 5.1, 5.2, 5.3, 5.4, 5.5_
- [ ]* 6.3 Write property test: Trade setup selection picks latest matching symbol (Property 6)
- **Property 6: Trade setup selection picks latest matching symbol**
- Use `fast-check` to generate random lists of trade setups with random symbols and timestamps
- Verify selection logic returns the latest setup for the target symbol, or null if no match
- **Validates: Requirements 5.1, 5.5**
- [x] 7. Final checkpoint — Ensure all tests pass
- Ensure all tests pass; ask the user if questions arise.
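The y-axis range computation referenced in tasks 5.1 and 5.2 reduces to a min/max over candle extremes and the three trade levels. A minimal sketch (the `pad` factor and function name are illustrative; the real chart code feeds `lightweight-charts`):

```python
def y_axis_range(lows, highs, entry, stop, target, pad=0.02):
    """Compute a y-axis [lo, hi] covering all candles and all trade levels.

    `lows`/`highs` are per-candle price extremes; `pad` adds a small
    margin so overlay lines are not drawn flush against the chart edge.
    """
    lo = min(min(lows), entry, stop, target)
    hi = max(max(highs), entry, stop, target)
    span = (hi - lo) or 1.0  # guard against a degenerate flat range
    return lo - span * pad, hi + span * pad
```

Property 5 then states that for any generated OHLCV data and trade setup, `entry`, `stop`, and `target` all fall within the returned `[lo, hi]`.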
## Notes
- Tasks marked with `*` are optional and can be skipped for faster MVP
- Each task references specific requirements for traceability
- Property tests use `hypothesis` (Python) and `fast-check` (TypeScript) with minimum 100 iterations
- No new database tables — breakdowns are computed on-the-fly from existing data
- Trade overlay uses the existing `lightweight-charts` rendering pipeline, no new library needed
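The weight re-normalization used in tasks 1.7 and 1.10 can be sketched as follows (the function name, dict shape, and dimension names are illustrative, not the service's actual API):

```python
def renormalize_weights(original: dict[str, float], available: set[str]) -> dict[str, float]:
    """Redistribute the weight of missing dimensions across available ones.

    Each available dimension keeps its relative proportion:
    new_weight = original_weight / sum(available original weights).
    """
    total = sum(w for name, w in original.items() if name in available)
    if total <= 0:
        return {}
    return {name: w / total for name, w in original.items() if name in available}


weights = {"fundamentals": 0.4, "momentum": 0.3, "sr_quality": 0.3}
renorm = renormalize_weights(weights, {"momentum", "sr_quality"})
# momentum -> 0.3 / 0.6 = 0.5, sr_quality -> 0.5; re-normalized weights sum to 1.0
```

Property 2 checks exactly this invariant: the results sum to 1.0 and each equals `original_weight / sum(available_weights)`.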

{"specId": "b047fbd7-17a8-437c-8c32-ebc8482b2aba", "workflowType": "design-first", "specType": "feature"}

# Design Document: UX Improvements
## Overview
This feature addresses three UX pain points across the application: (1) unbalanced S/R zone selection that favors one side based on ticker trend, (2) a Trade Scanner page that lacks explanatory context and detailed R:R analysis, and (3) a Rankings page weights form that requires awkward decimal input. The changes span both backend zone-selection logic and frontend presentation components.
## Architecture
```mermaid
graph TD
subgraph Backend
A[sr_service.py<br/>cluster_sr_zones] -->|balanced selection| B[sr_levels router]
B -->|zones + filtered levels| C[API Response]
end
subgraph Frontend - Ticker Detail
C --> D[CandlestickChart]
C --> E[S/R Levels Table<br/>filtered to chart zones]
end
subgraph Frontend - Scanner
F[ScannerPage] --> G[Explainer Banner]
F --> H[TradeTable<br/>expanded columns]
end
subgraph Frontend - Rankings
I[RankingsPage] --> J[WeightsForm<br/>slider-based input]
end
```
## Components and Interfaces
### Component 1: Balanced S/R Zone Selection (Backend)
**Purpose**: Ensure `cluster_sr_zones()` returns a mix of both support and resistance zones instead of only the strongest zones regardless of type.
**Current behavior**: Zones are sorted by strength descending and the top N are returned. For a strongly bullish ticker, all top zones may be support; for bearish, all resistance.
**Proposed algorithm**:
```mermaid
sequenceDiagram
participant Caller
participant cluster_sr_zones
participant ZonePool
Caller->>cluster_sr_zones: levels, current_price, max_zones=6
cluster_sr_zones->>ZonePool: Cluster all levels into zones
cluster_sr_zones->>ZonePool: Split into support[] and resistance[]
cluster_sr_zones->>ZonePool: Sort each list by strength desc
cluster_sr_zones->>ZonePool: Interleave pick (round-robin by strength)
cluster_sr_zones-->>Caller: balanced zones (e.g. 3S + 3R)
```
**Selection rules**:
1. Cluster all levels into zones (existing merge logic unchanged).
2. Tag each zone as support or resistance (existing logic).
3. Split zones into two pools: `support_zones` and `resistance_zones`, each sorted by strength descending.
4. Interleave selection: alternate picking the strongest remaining zone from each pool until `max_zones` is reached.
5. If one pool is exhausted, fill remaining slots from the other pool.
6. Final result sorted by strength descending for consistent ordering.
This naturally produces balanced output (e.g., 3+3 for max_zones=6) while gracefully degrading when one side has fewer strong zones (e.g., 1R + 5S if only 1 resistance zone exists).
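The selection rules above can be sketched with zones simplified to `(zone_type, strength)` tuples (the real implementation operates on the service's zone objects after the existing clustering step):

```python
def select_balanced_zones(zones, max_zones=6):
    """Round-robin pick the strongest remaining zone from each pool.

    `zones` is a list of (zone_type, strength) tuples where zone_type is
    "support" or "resistance". Returns at most `max_zones` zones, sorted
    by strength descending.
    """
    if max_zones <= 0:
        return []
    support = sorted((z for z in zones if z[0] == "support"),
                     key=lambda z: z[1], reverse=True)
    resistance = sorted((z for z in zones if z[0] == "resistance"),
                        key=lambda z: z[1], reverse=True)
    pools, picked, i = [support, resistance], [], 0
    while len(picked) < max_zones and (support or resistance):
        pool = pools[i % 2]
        if pool:  # an exhausted pool is skipped, draining the other
            picked.append(pool.pop(0))
        i += 1
    return sorted(picked, key=lambda z: z[1], reverse=True)
```

With three strong support zones and one resistance zone and `max_zones=3`, this yields two supports plus the resistance — the graceful degradation described above.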
**Interface change to `SRLevelResponse`**:
Add a `visible_levels` field that contains only the individual S/R levels whose price falls within one of the returned zones. This allows the frontend table to show only what's on the chart.
```python
class SRLevelResponse(BaseModel):
symbol: str
levels: list[SRLevelResult] # all levels (unchanged, for backward compat)
zones: list[SRZoneResult] # balanced zones shown on chart
visible_levels: list[SRLevelResult] # levels within returned zones only
count: int
```
**Responsibilities**:
- Guarantee both support and resistance representation when both exist
- Compute `visible_levels` by filtering `levels` to those within zone boundaries
- Maintain backward compatibility (existing `levels` field unchanged)
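The `visible_levels` computation is a simple bounds filter. A sketch using plain dicts for illustration (the router works with the Pydantic result objects; the `price_level`/`low`/`high` keys mirror the field names used elsewhere in this design):

```python
def filter_visible_levels(levels, zones):
    """Keep only levels whose price falls inside at least one zone.

    `levels`: list of dicts with a "price_level" key.
    `zones`: list of dicts with "low" and "high" bounds.
    """
    return [
        lvl for lvl in levels
        if any(z["low"] <= lvl["price_level"] <= z["high"] for z in zones)
    ]
```

An empty `zones` list trivially yields an empty `visible_levels`, matching the error-handling scenario below.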
---
### Component 2: S/R Levels Table Filtering (Frontend)
**Purpose**: The S/R levels table below the chart currently shows all detected levels. It should only show levels that correspond to zones visible on the chart.
**Current behavior**: `TickerDetailPage` renders `sortedLevels` from `srLevels.data.levels` — the full unfiltered list.
**Proposed change**: Use `srLevels.data.visible_levels` instead of `srLevels.data.levels` for the table. The chart continues to receive `zones` as before.
```mermaid
graph LR
A[API Response] -->|zones| B[CandlestickChart]
A -->|visible_levels| C[S/R Levels Table]
A -->|levels| D[Other consumers<br/>backward compat]
```
**Interface**:
```typescript
// Updated SRLevelResponse type
interface SRLevelResponse {
symbol: string;
levels: SRLevel[]; // all levels
zones: SRZone[]; // balanced zones on chart
visible_levels: SRLevel[]; // only levels within chart zones
count: number;
}
```
**Responsibilities**:
- Render only `visible_levels` in the table
- Keep table sorted by strength descending
- Show zone type color coding (green for support, red for resistance)
---
### Component 3: Trade Scanner Explainer & R:R Analysis (Frontend)
**Purpose**: Add contextual explanation to the Scanner page and surface detailed trade analysis data (entry, stop-loss, target, R:R, risk amount, reward amount) so users can evaluate setups.
**Best practices for R:R presentation** (informed by trading UX conventions):
- Show entry price, stop-loss, and take-profit (target) as the core trio
- Display R:R ratio prominently with color coding (green ≥ 3:1, amber ≥ 2:1, red < 2:1)
- Show absolute risk and reward amounts (dollar values) so traders can size positions
- Include percentage distance from entry to stop and entry to target
- Visual risk/reward bar showing proportional risk vs reward
**Explainer banner content**: A brief description of what the scanner does — it scans tracked tickers for asymmetric risk-reward trade setups using S/R levels as targets and ATR-based stops.
```mermaid
graph TD
subgraph ScannerPage
A[Explainer Banner] --> B[Filter Controls]
B --> C[TradeTable]
end
subgraph TradeTable Columns
D[Symbol]
E[Direction]
F[Entry Price]
G[Stop Loss]
H[Target / TP]
I[Risk $]
J[Reward $]
K[R:R Ratio]
L[% to Stop]
M[% to Target]
N[Score]
O[Detected]
end
```
**New computed fields** (frontend-only, derived from existing data):
```typescript
// Computed per trade row — no backend changes needed
interface TradeAnalysis {
risk_amount: number; // |entry_price - stop_loss|
reward_amount: number; // |target - entry_price|
stop_pct: number; // risk_amount / entry_price * 100
target_pct: number; // reward_amount / entry_price * 100
}
```
**Responsibilities**:
- Render explainer banner at top of ScannerPage
- Compute risk/reward amounts and percentages client-side
- Add new columns to TradeTable: Risk $, Reward $, % to Stop, % to Target
- Color-code R:R ratio values (green ≥ 3, amber ≥ 2, red < 2)
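The per-row computation and color bucketing reduce to a few lines. Shown here in Python for illustration — the actual implementation is TypeScript inside `TradeTable.tsx`:

```python
def trade_analysis(entry_price: float, stop_loss: float, target: float) -> dict[str, float]:
    """Derive risk/reward fields from a trade setup (entry_price must be > 0)."""
    risk_amount = abs(entry_price - stop_loss)
    reward_amount = abs(target - entry_price)
    return {
        "risk_amount": risk_amount,
        "reward_amount": reward_amount,
        "stop_pct": risk_amount / entry_price * 100,
        "target_pct": reward_amount / entry_price * 100,
    }


def rr_color(rr: float) -> str:
    """Color bucket for the R:R ratio column: green >= 3, amber >= 2, red < 2."""
    if rr >= 3.0:
        return "green"
    if rr >= 2.0:
        return "amber"
    return "red"
```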
---
### Component 4: Rankings Weights Slider Form (Frontend)
**Purpose**: Replace the raw decimal number inputs in `WeightsForm` with range sliders using whole-number values (0-100) for better UX.
**Current behavior**: Each weight is an `<input type="number" step="any">` accepting arbitrary decimals. Users must type values like `0.25`, which is error-prone.
**Proposed UX**:
```mermaid
graph LR
subgraph Current
A[Number Input<br/>step=any<br/>e.g. 0.25]
end
subgraph Proposed
B[Range Slider 0-100<br/>with numeric display]
C[Live value label]
B --> C
end
```
**Design**:
- Each weight gets a horizontal range slider (`<input type="range" min={0} max={100} step={1}>`)
- Current value displayed next to the slider as a whole number
- On submit, values are normalized to sum to 1.0 before sending to the API (divide each by total)
- Visual feedback: slider track colored proportionally
- Label shows the weight name (humanized) and current value
**Normalization logic**:
```
On submit:
total = sum of all slider values
if total > 0:
normalized[key] = slider_value[key] / total
else:
normalized[key] = 0
```
This means a user setting all sliders to 50 gets equal weights (each 1/N), and setting one to 100 and others to 0 gives that dimension full weight. The UX is intuitive — higher number = more importance.
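The normalization above can be sketched as follows (Python for illustration; the real code lives in `WeightsForm.tsx`):

```python
def normalize_sliders(sliders: dict[str, int]) -> dict[str, float]:
    """Convert whole-number slider values (0-100) to decimal weights summing to 1.0.

    Returns all-zero weights when every slider is zero; the UI disables
    submit in that case, so this branch is purely defensive.
    """
    total = sum(sliders.values())
    if total <= 0:
        return {key: 0.0 for key in sliders}
    return {key: value / total for key, value in sliders.items()}


normalize_sliders({"momentum": 50, "fundamentals": 50, "sr_quality": 50})
# all-equal sliders -> each weight 1/3
```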
**Interface**:
```typescript
interface WeightsFormProps {
weights: Record<string, number>; // API values (0-1 decimals)
}
// Internal state uses whole numbers 0-100
// Convert on mount: Math.round(apiWeight * 100)
// Convert on submit: sliderValue / sum(allSliderValues)
```
**Responsibilities**:
- Convert API decimal weights to 0-100 scale on mount
- Render slider per weight dimension with live value display
- Normalize back to decimal weights on submit
- Maintain existing mutation hook (`useUpdateWeights`)
---
## Data Models
### Updated SRLevelResponse (Backend)
```python
class SRLevelResponse(BaseModel):
symbol: str
levels: list[SRLevelResult] # all detected levels
zones: list[SRZoneResult] # balanced S/R zones for chart
visible_levels: list[SRLevelResult] # levels within chart zones
count: int # total level count
```
**Validation Rules**:
- `visible_levels` is a subset of `levels`
- Each entry in `visible_levels` has a price within the bounds of at least one zone
- `zones` contains at most `max_zones` entries
- When both support and resistance zones exist, `zones` contains at least one of each type
### TradeAnalysis (Frontend — computed, not persisted)
```typescript
interface TradeAnalysis {
risk_amount: number; // always positive
reward_amount: number; // always positive
stop_pct: number; // percentage, always positive
target_pct: number; // percentage, always positive
}
```
**Validation Rules**:
- All values are non-negative
- `risk_amount = Math.abs(entry_price - stop_loss)`
- `reward_amount = Math.abs(target - entry_price)`
---
## Error Handling
### Scenario 1: No zones of one type exist
**Condition**: All S/R levels cluster on one side of current price (e.g., price at all-time high — no resistance levels).
**Response**: Fill all `max_zones` slots from the available type. `visible_levels` reflects only that type.
**Recovery**: No special handling needed — the balanced algorithm gracefully fills from the available pool.
### Scenario 2: Zero S/R levels
**Condition**: No S/R levels detected for a ticker.
**Response**: Return empty `zones`, empty `visible_levels`, empty `levels`. Chart renders without overlays. Table section hidden.
**Recovery**: User can click "Fetch Data" to trigger recalculation.
### Scenario 3: Weight sliders all set to zero
**Condition**: User drags all weight sliders to 0.
**Response**: Disable the submit button and show a validation message ("At least one weight must be greater than zero").
**Recovery**: User adjusts at least one slider above 0.
---
## Testing Strategy
### Unit Testing Approach
- Test `cluster_sr_zones()` with balanced selection: verify mixed output when both types exist
- Test edge cases: all support, all resistance, single zone, empty input
- Test `visible_levels` filtering: levels within zone bounds included, others excluded
- Test weight normalization: verify sum-to-1 property, all-zero guard, single-weight case
### Property-Based Testing Approach
**Property Test Library**: Hypothesis (Python backend), fast-check (frontend)
- For any input to `cluster_sr_zones()` with both support and resistance levels present, the output contains at least one of each type (when max_zones ≥ 2)
- For any set of slider values where at least one > 0, normalized weights sum to 1.0 (within floating-point tolerance)
- `visible_levels` is always a subset of `levels`
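As an illustration of the property style, the normalization property could be expressed in Hypothesis roughly as follows (the local `normalize` helper stands in for whichever implementation is under test):

```python
from hypothesis import given, strategies as st


def normalize(values: list[int]) -> list[float]:
    # stand-in for the implementation under test
    total = sum(values)
    return [v / total for v in values]


@given(
    st.lists(st.integers(min_value=0, max_value=100), min_size=1, max_size=10)
    .filter(lambda vs: sum(vs) > 0)
)
def test_normalized_weights_sum_to_one(values):
    assert abs(sum(normalize(values)) - 1.0) <= 1e-9
```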
### Integration Testing Approach
- E2E test: load ticker detail page, verify chart zones contain both types, verify table matches chart zones
- E2E test: load scanner page, verify explainer text visible, verify computed columns present
- E2E test: load rankings page, verify sliders render, adjust slider, submit, verify API call with normalized weights
---
## Correctness Properties
*A property is a characteristic or behavior that should hold true across all valid executions of a system — essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*
### Property 1: Balanced zone selection guarantees both types
*For any* set of S/R levels that produce at least one support zone and at least one resistance zone, and any `max_zones` ≥ 2, the output of `cluster_sr_zones()` shall contain at least one support zone and at least one resistance zone.
**Validates: Requirement 1.1**
### Property 2: Interleave selection correctness
*For any* set of S/R levels producing support zones S (sorted by strength desc) and resistance zones R (sorted by strength desc), the zones selected by `cluster_sr_zones()` shall match the result of round-robin picking from S and R alternately (strongest first from each pool) until `max_zones` is reached or both pools are exhausted.
**Validates: Requirements 1.2, 1.3**
### Property 3: Zone output is sorted by strength
*For any* input to `cluster_sr_zones()`, the returned zones list shall be sorted by strength in descending order.
**Validates: Requirement 1.4**
### Property 4: Visible levels are a subset within zone bounds
*For any* SRLevelResponse, every entry in `visible_levels` shall (a) also appear in `levels`, and (b) have a `price_level` that falls within the `[low, high]` range of at least one entry in `zones`.
**Validates: Requirements 2.1, 2.2**
### Property 5: Trade analysis computation correctness
*For any* trade setup with positive `entry_price`, `stop_loss`, and `target`, the computed Trade_Analysis values shall satisfy: `risk_amount == |entry_price - stop_loss|`, `reward_amount == |target - entry_price|`, `stop_pct == risk_amount / entry_price * 100`, and `target_pct == reward_amount / entry_price * 100`.
**Validates: Requirements 5.2, 5.3, 5.4, 5.5**
### Property 6: Weight conversion round-trip
*For any* decimal weight value `w` in [0, 1], converting to slider scale via `Math.round(w * 100)` and then normalizing back (dividing by the sum of all slider values) shall preserve the relative proportions of the original weights within floating-point tolerance.
**Validates: Requirement 6.3**
### Property 7: Normalized weights sum to one
*For any* set of slider values (integers 0-100) where at least one value is greater than zero, the normalized weights (each divided by the sum of all values) shall sum to 1.0 within floating-point tolerance (±1e-9).
**Validates: Requirement 7.1**
---
## Performance Considerations
- Balanced zone selection adds negligible overhead — it's a simple split + interleave over an already-small list (typically < 50 zones)
- `visible_levels` filtering is O(levels × zones), both small — no concern
- Frontend computed columns (risk/reward amounts) are derived inline per row — trivial cost
- Slider rendering uses native `<input type="range">` — no performance impact vs current number inputs
---
## Security Considerations
- No new API endpoints or authentication changes
- Weight normalization happens client-side before submission; backend should still validate that weights are non-negative and sum to ~1.0
- No user-generated content introduced (explainer text is static)
---
## Dependencies
- No new external libraries required
- Backend: existing FastAPI, Pydantic, SQLAlchemy stack
- Frontend: existing React, Recharts, TanStack Query stack
- Slider styling can use Tailwind CSS utilities (already in project) — no additional UI library needed

# Requirements Document
## Introduction
This specification covers three UX improvements to the stock signal platform: (1) balanced support/resistance zone selection that ensures both zone types are represented on the chart, with a filtered levels table; (2) a Trade Scanner page enhanced with an explainer banner and detailed risk/reward analysis columns; and (3) a Rankings page weights form that replaces decimal number inputs with intuitive range sliders and automatic normalization.
## Glossary
- **SR_Service**: The backend service (`sr_service.py`) containing `cluster_sr_zones()` that clusters, scores, and selects S/R zones.
- **SR_API**: The FastAPI router endpoint (`/sr-levels/{symbol}`) that returns S/R levels and zones for a ticker.
- **SRLevelResponse**: The Pydantic response model returned by the SR_API containing levels, zones, and metadata.
- **Zone_Selector**: The interleave-based selection logic within `cluster_sr_zones()` that picks zones from support and resistance pools alternately.
- **Visible_Levels**: The subset of all detected S/R levels whose price falls within the bounds of at least one returned zone.
- **Ticker_Detail_Page**: The frontend page (`TickerDetailPage.tsx`) displaying chart, scores, sentiment, fundamentals, and S/R data for a single ticker.
- **SR_Levels_Table**: The HTML table on the Ticker_Detail_Page that lists individual S/R levels sorted by strength.
- **Scanner_Page**: The frontend page (`ScannerPage.tsx`) displaying trade setups with filtering and sorting.
- **Trade_Table**: The table component (`TradeTable.tsx`) rendering trade setup rows on the Scanner_Page.
- **Explainer_Banner**: A static informational banner at the top of the Scanner_Page describing what the scanner does.
- **Trade_Analysis**: A set of computed fields (risk amount, reward amount, stop percentage, target percentage) derived client-side from each trade setup row.
- **Rankings_Page**: The frontend page displaying ticker rankings with configurable scoring weights.
- **Weights_Form**: The form component (`WeightsForm.tsx`) on the Rankings_Page for adjusting scoring dimension weights.
- **Weight_Slider**: A range input (0-100) replacing the current decimal number input for each scoring weight dimension.
- **Normalization**: The process of dividing each slider value by the sum of all slider values to produce decimal weights that sum to 1.0.
## Requirements
### Requirement 1: Balanced Zone Selection
**User Story:** As a trader, I want the S/R zone selection to include both support and resistance zones when both exist, so that I get a balanced view of key price levels regardless of the ticker's trend direction.
#### Acceptance Criteria
1. WHEN both support and resistance zones exist and `max_zones` is 2 or greater, THE Zone_Selector SHALL return at least one support zone and at least one resistance zone.
2. THE Zone_Selector SHALL select zones by alternating picks from the support pool and the resistance pool, each sorted by strength descending, until `max_zones` is reached.
3. WHEN one pool is exhausted before `max_zones` is reached, THE Zone_Selector SHALL fill the remaining slots from the other pool in strength-descending order.
4. THE Zone_Selector SHALL sort the final selected zones by strength descending before returning them.
5. WHEN no S/R levels are provided, THE SR_Service SHALL return an empty zones list.
6. WHEN `max_zones` is zero or negative, THE SR_Service SHALL return an empty zones list.
### Requirement 2: Visible Levels Filtering
**User Story:** As a trader, I want the API to provide a filtered list of S/R levels that correspond to the zones shown on the chart, so that the levels table only shows relevant data.
#### Acceptance Criteria
1. THE SRLevelResponse SHALL include a `visible_levels` field containing only the S/R levels whose price falls within the bounds of at least one returned zone.
2. THE `visible_levels` field SHALL be a subset of the `levels` field in the same response.
3. THE SRLevelResponse SHALL continue to include the full `levels` field for backward compatibility.
4. WHEN the zones list is empty, THE SRLevelResponse SHALL return an empty `visible_levels` list.
### Requirement 3: S/R Levels Table Filtering
**User Story:** As a trader, I want the S/R levels table below the chart to show only levels that correspond to zones visible on the chart, so that the table and chart are consistent.
#### Acceptance Criteria
1. THE SR_Levels_Table SHALL render levels from the `visible_levels` field of the API response instead of the full `levels` field.
2. THE SR_Levels_Table SHALL sort displayed levels by strength descending.
3. THE SR_Levels_Table SHALL color-code each level row green for support and red for resistance.
4. WHEN `visible_levels` is empty, THE Ticker_Detail_Page SHALL hide the SR_Levels_Table section.
### Requirement 4: Trade Scanner Explainer Banner
**User Story:** As a user, I want to see a brief explanation of what the Trade Scanner does when I visit the page, so that I understand the purpose and methodology of the displayed trade setups.
#### Acceptance Criteria
1. THE Scanner_Page SHALL display an Explainer_Banner above the filter controls.
2. THE Explainer_Banner SHALL contain static text describing that the scanner identifies asymmetric risk-reward trade setups using S/R levels as targets and ATR-based stops.
3. THE Explainer_Banner SHALL be visible on initial page load without requiring user interaction.
### Requirement 5: Trade Scanner R:R Analysis Columns
**User Story:** As a trader, I want to see detailed risk/reward analysis data (risk amount, reward amount, percentage distances, and color-coded R:R ratio) for each trade setup, so that I can evaluate and compare setups at a glance.
#### Acceptance Criteria
1. THE Trade_Table SHALL display the following additional columns: Risk $ (absolute risk amount), Reward $ (absolute reward amount), % to Stop (percentage distance from entry to stop-loss), and % to Target (percentage distance from entry to target).
2. THE Trade_Analysis risk_amount SHALL be computed as the absolute difference between entry_price and stop_loss.
3. THE Trade_Analysis reward_amount SHALL be computed as the absolute difference between target and entry_price.
4. THE Trade_Analysis stop_pct SHALL be computed as risk_amount divided by entry_price multiplied by 100.
5. THE Trade_Analysis target_pct SHALL be computed as reward_amount divided by entry_price multiplied by 100.
6. WHEN the R:R ratio is 3.0 or greater, THE Trade_Table SHALL display the R:R value with green color coding.
7. WHEN the R:R ratio is 2.0 or greater but less than 3.0, THE Trade_Table SHALL display the R:R value with amber color coding.
8. WHEN the R:R ratio is less than 2.0, THE Trade_Table SHALL display the R:R value with red color coding.
### Requirement 6: Rankings Weight Slider Input
**User Story:** As a user, I want to adjust scoring weights using range sliders with whole-number values instead of typing decimal numbers, so that the input is intuitive and less error-prone.
#### Acceptance Criteria
1. THE Weights_Form SHALL render each weight dimension as a Weight_Slider with a range of 0 to 100 and a step of 1.
2. THE Weights_Form SHALL display the current whole-number value next to each Weight_Slider.
3. WHEN the Weights_Form receives API weight values (decimals between 0 and 1), THE Weights_Form SHALL convert each value to the 0-100 scale by multiplying by 100 and rounding to the nearest integer.
4. THE Weights_Form SHALL display a humanized label for each weight dimension by replacing underscores with spaces.
### Requirement 7: Weight Normalization on Submit
**User Story:** As a user, I want my slider values to be automatically normalized to valid decimal weights when I submit, so that I don't need to manually ensure they sum to 1.0.
#### Acceptance Criteria
1. WHEN the user submits the Weights_Form and at least one slider value is greater than zero, THE Weights_Form SHALL normalize each slider value by dividing it by the sum of all slider values.
2. WHEN all slider values are zero, THE Weights_Form SHALL disable the submit button.
3. WHEN all slider values are zero, THE Weights_Form SHALL display a validation message stating that at least one weight must be greater than zero.
4. THE Weights_Form SHALL send the normalized decimal weights to the API using the existing mutation hook.

# Implementation Plan: UX Improvements
## Overview
Implement four UX improvements: balanced S/R zone selection in the backend, visible levels filtering (backend + frontend), Trade Scanner explainer banner and R:R analysis columns, and Rankings weight slider form. Backend changes use Python (FastAPI/Pydantic), frontend changes use TypeScript (React).
## Tasks
- [x] 1. Implement balanced S/R zone selection in `cluster_sr_zones()`
- [x] 1.1 Refactor `cluster_sr_zones()` to use interleave-based balanced selection
- In `app/services/sr_service.py`, after clustering and computing zones, split zones into `support_zones` and `resistance_zones` pools sorted by strength descending
- Implement round-robin interleave picking: alternate strongest from each pool until `max_zones` is reached or both pools exhausted
- If one pool is exhausted, fill remaining slots from the other pool
- Sort final selected zones by strength descending before returning
- _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5, 1.6_
- [ ]* 1.2 Write property test: balanced zone selection guarantees both types
- **Property 1: Balanced zone selection guarantees both types**
- **Validates: Requirement 1.1**
- In `tests/unit/test_cluster_sr_zones.py`, use Hypothesis to generate sets of levels with at least one support and one resistance zone, verify output contains at least one of each type when `max_zones >= 2`
- [ ]* 1.3 Write property test: interleave selection correctness
- **Property 2: Interleave selection correctness**
- **Validates: Requirements 1.2, 1.3**
- Verify the selected zones match the expected round-robin interleave from support and resistance pools
- [ ]* 1.4 Write property test: zone output sorted by strength
- **Property 3: Zone output is sorted by strength**
- **Validates: Requirement 1.4**
- For any input, verify returned zones are sorted by strength descending
- [x] 1.5 Update existing unit tests for balanced selection behavior
- Update `tests/unit/test_cluster_sr_zones.py` to add tests for: mixed support/resistance input produces balanced output, all-support input fills from support only, all-resistance input fills from resistance only, single zone edge case
- _Requirements: 1.1, 1.2, 1.3, 1.5, 1.6_
- [x] 2. Implement `visible_levels` filtering in backend
- [x] 2.1 Add `visible_levels` field to `SRLevelResponse` schema
- In `app/schemas/sr_level.py`, add `visible_levels: list[SRLevelResult] = []` to `SRLevelResponse`
- _Requirements: 2.1, 2.3_
- [x] 2.2 Compute `visible_levels` in the SR levels router
- In `app/routers/sr_levels.py`, after computing zones, filter `level_results` to only those whose `price_level` falls within the `[low, high]` range of at least one zone
- Set the filtered list as `visible_levels` on the `SRLevelResponse`
- When zones list is empty, `visible_levels` should be empty
- _Requirements: 2.1, 2.2, 2.4_
- [ ]* 2.3 Write property test: visible levels subset within zone bounds
- **Property 4: Visible levels are a subset within zone bounds**
- **Validates: Requirements 2.1, 2.2**
- Verify every entry in `visible_levels` appears in `levels` and has a price within at least one zone's `[low, high]` range
- [x] 2.4 Update router unit tests for `visible_levels`
- In `tests/unit/test_sr_levels_router.py`, add tests verifying: `visible_levels` is present in response, `visible_levels` contains only levels within zone bounds, `visible_levels` is empty when zones are empty
- _Requirements: 2.1, 2.2, 2.4_
- [x] 3. Checkpoint - Ensure all backend tests pass
- Ensure all tests pass; ask the user if questions arise.
- [x] 4. Update frontend types and S/R levels table filtering
- [x] 4.1 Add `visible_levels` to frontend `SRLevelResponse` type
- In `frontend/src/lib/types.ts`, add `visible_levels: SRLevel[]` to the `SRLevelResponse` interface
- _Requirements: 2.1_
- [x] 4.2 Update `TickerDetailPage` to use `visible_levels` for the S/R table
- In `frontend/src/pages/TickerDetailPage.tsx`, change `sortedLevels` to derive from `srLevels.data.visible_levels` instead of `srLevels.data.levels`
- Keep sorting by strength descending
- Hide the S/R Levels Table section when `visible_levels` is empty
- Maintain existing color coding (green for support, red for resistance)
- _Requirements: 3.1, 3.2, 3.3, 3.4_
- [x] 5. Add Trade Scanner explainer banner and R:R analysis columns
- [x] 5.1 Add explainer banner to `ScannerPage`
- In `frontend/src/pages/ScannerPage.tsx`, add a static informational banner above the filter controls
- Banner text: describe that the scanner identifies asymmetric risk-reward trade setups using S/R levels as targets and ATR-based stops
- Banner should be visible on initial page load without user interaction
- _Requirements: 4.1, 4.2, 4.3_
- [x] 5.2 Add R:R analysis columns to `TradeTable`
- In `frontend/src/components/scanner/TradeTable.tsx`, add computed columns: Risk $ (`|entry_price - stop_loss|`), Reward $ (`|target - entry_price|`), % to Stop (`risk / entry * 100`), % to Target (`reward / entry * 100`)
- Color-code the existing R:R ratio column: green for ratios ≥ 3.0, amber for ratios ≥ 2.0 and < 3.0, red for ratios < 2.0
- Update the `SortColumn` type and `columns` array to include the new columns
- Update `sortTrades` in `ScannerPage.tsx` to handle sorting by new computed columns
- _Requirements: 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8_
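The computed-column arithmetic in task 5.2 is language-agnostic; here is a quick Python sketch of the formulas (the `TradeTable` implementation itself is TypeScript, and the function names here are illustrative):

```python
def trade_analysis(entry_price: float, stop_loss: float, target: float) -> dict:
    """Derived R:R columns: dollar risk/reward and their percentages of entry."""
    risk = abs(entry_price - stop_loss)
    reward = abs(target - entry_price)
    return {
        "risk_usd": risk,
        "reward_usd": reward,
        "stop_pct": risk / entry_price * 100,
        "target_pct": reward / entry_price * 100,
    }

def rr_color(ratio: float) -> str:
    # Thresholds from the task: green >= 3.0, amber >= 2.0, red below that
    if ratio >= 3.0:
        return "green"
    if ratio >= 2.0:
        return "amber"
    return "red"

row = trade_analysis(entry_price=100.0, stop_loss=95.0, target=115.0)
# risk $5, reward $15, ~5% to stop, ~15% to target; R:R = 3.0 -> green
```

The same thresholds drive the color coding of the existing R:R column, so the computed columns and the color stay consistent.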
- [ ]* 5.3 Write property test: trade analysis computation correctness
- **Property 5: Trade analysis computation correctness**
- **Validates: Requirements 5.2, 5.3, 5.4, 5.5**
- Using fast-check, for any trade with positive entry_price, stop_loss, and target, verify `risk_amount == |entry_price - stop_loss|`, `reward_amount == |target - entry_price|`, `stop_pct == risk_amount / entry_price * 100`, `target_pct == reward_amount / entry_price * 100`
- [x] 6. Convert Rankings weight inputs to sliders
- [x] 6.1 Replace number inputs with range sliders in `WeightsForm`
- In `frontend/src/components/rankings/WeightsForm.tsx`, replace `<input type="number">` with `<input type="range" min={0} max={100} step={1}>`
- On mount, convert API decimal weights to a 0–100 scale: `Math.round(w * 100)`
- Display current whole-number value next to each slider
- Show humanized label (replace underscores with spaces)
- _Requirements: 6.1, 6.2, 6.3, 6.4_
- [x] 6.2 Implement weight normalization on submit
- On submit, normalize slider values: divide each by the sum of all values
- Disable submit button when all sliders are zero
- Show validation message "At least one weight must be greater than zero" when all are zero
- Send normalized decimal weights via existing `useUpdateWeights` mutation
- _Requirements: 7.1, 7.2, 7.3, 7.4_
- [ ]* 6.3 Write property test: weight conversion round-trip
- **Property 6: Weight conversion round-trip**
- **Validates: Requirement 6.3**
- Using fast-check, verify that converting decimal weights to slider scale and normalizing back preserves relative proportions within floating-point tolerance
- [ ]* 6.4 Write property test: normalized weights sum to one
- **Property 7: Normalized weights sum to one**
- **Validates: Requirement 7.1**
- Using fast-check, for any set of slider values (integers 0–100) where at least one > 0, verify normalized weights sum to 1.0 within ±1e-9
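Properties 6 and 7 reduce to two invariants that a simple randomized loop can exercise. A stdlib-only illustration follows (the plan's actual tests use fast-check on the frontend; `normalize` here is an illustrative stand-in, not the production code):

```python
import random

def normalize(values: list[int]) -> list[float]:
    """Divide each slider value by the total so the results sum to 1.0."""
    total = sum(values)
    return [v / total for v in values]

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    values = [random.randint(0, 100) for _ in range(n)]
    if sum(values) == 0:
        continue  # the UI disables submit in this case; nothing to normalize
    weights = normalize(values)
    # Property 7: normalized weights sum to 1.0 within tolerance
    assert abs(sum(weights) - 1.0) <= 1e-9
    # Property 6 (round-trip): relative proportions are preserved
    for v, w in zip(values, weights):
        assert abs(w * sum(values) - v) <= 1e-9
```

Unlike fast-check or Hypothesis, this loop does not shrink failing inputs, but it demonstrates the same universal properties.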
- [x] 7. Final checkpoint - Ensure all tests pass
- Ensure all tests pass; ask the user if questions arise.
## Notes
- Tasks marked with `*` are optional and can be skipped for a faster MVP
- Backend uses Python (Hypothesis for property tests), frontend uses TypeScript/React (fast-check for property tests)
- Each task references specific requirements for traceability
- Checkpoints ensure incremental validation after backend and full implementation phases
- Property tests validate universal correctness properties from the design document