# Design Document: Intelligent Trade Recommendation System
## Overview
The Intelligent Trade Recommendation System enhances the Signal Dashboard's trade setup generation by providing bidirectional analysis (LONG and SHORT), confidence scoring, multiple price targets with probability estimates, and signal conflict detection. This system transforms raw multi-dimensional signals into actionable trading recommendations suitable for non-professional traders.
### Goals
- Generate both LONG and SHORT trade setups for every ticker with independent confidence scores
- Identify 3-5 price targets at S/R levels with probability estimates for staged profit-taking
- Detect and flag contradictions between sentiment, technical, momentum, and fundamental signals
- Provide clear recommendation summaries with action, reasoning, and risk level
- Maintain performance targets: at most 500 ms per ticker, at least 10 tickers/second in batch processing
### Non-Goals
- Real-time trade execution or order management
- Backtesting or historical performance tracking (deferred to future phase)
- Machine learning-based prediction models
- Integration with external trading platforms
### Key Design Decisions
1. **Extend TradeSetup model** rather than create new tables to maintain backward compatibility
2. **Synchronous recommendation generation** during R:R scanner job (no separate scheduled job)
3. **Quality-score based target selection** combining R:R ratio, S/R strength, and proximity
4. **Rule-based confidence scoring** using dimension score thresholds and alignment checks
5. **JSON fields for flexible data** (targets array, conflict flags) to avoid complex schema changes
## Architecture
### System Context
```mermaid
graph TB
subgraph "Existing System"
RR[R:R Scanner Service]
SCORE[Scoring Service]
SR[S/R Service]
IND[Indicator Service]
SENT[Sentiment Service]
FUND[Fundamental Service]
end
subgraph "New Components"
REC[Recommendation Engine]
DIR[Direction Analyzer]
TGT[Target Generator]
PROB[Probability Estimator]
CONF[Signal Conflict Detector]
end
subgraph "Data Layer"
TS[(TradeSetup Model)]
DS[(DimensionScore)]
SRL[(SRLevel)]
SENT_M[(SentimentScore)]
end
RR --> REC
SCORE --> REC
SR --> REC
REC --> DIR
REC --> TGT
REC --> PROB
REC --> CONF
DIR --> TS
TGT --> TS
PROB --> TS
CONF --> TS
DS --> DIR
DS --> CONF
SRL --> TGT
SRL --> PROB
SENT_M --> CONF
```
### Integration Strategy
The recommendation system integrates into the existing R:R scanner workflow:
1. **Trigger Point**: `rr_scanner_service.scan_ticker()` generates base LONG/SHORT setups
2. **Enhancement Phase**: New `recommendation_service.enhance_trade_setup()` enriches each setup
3. **Persistence**: Extended TradeSetup model stores all recommendation data
4. **API Layer**: Existing `/api/v1/trades` endpoints return enhanced data
This approach ensures:
- Zero breaking changes to existing scanner logic
- Backward compatibility with current TradeSetup consumers
- Single transaction for setup generation and enhancement
- No additional scheduled jobs required
## Components and Interfaces
### Recommendation Engine (recommendation_service.py)
**Responsibility**: Orchestrate the recommendation generation process for a trade setup.
**Interface**:
```python
async def enhance_trade_setup(
db: AsyncSession,
ticker: Ticker,
setup: TradeSetup,
dimension_scores: dict[str, float],
sr_levels: list[SRLevel],
sentiment_classification: str | None,
atr_value: float,
) -> TradeSetup:
"""Enhance a base trade setup with recommendation data.
Args:
db: Database session
ticker: Ticker model instance
setup: Base TradeSetup with direction, entry, stop, target, rr_ratio
dimension_scores: Dict of dimension -> score (technical, sentiment, momentum, etc.)
sr_levels: All S/R levels for the ticker
sentiment_classification: Latest sentiment (bearish/neutral/bullish)
atr_value: Current ATR for volatility adjustment
Returns:
Enhanced TradeSetup with confidence_score, targets, conflict_flags, etc.
"""
```
**Algorithm**:
1. Call `signal_conflict_detector.detect_conflicts()` to identify contradictions
2. Call `direction_analyzer.calculate_confidence()` to get the confidence score (conflict penalties are applied here)
3. Call `target_generator.generate_targets()` to get 3-5 targets
4. Call `probability_estimator.estimate_probability()` for each target
5. Generate recommendation summary based on confidence and conflicts
6. Update setup model with all recommendation data
7. Return enhanced setup
**Dependencies**: All four sub-components, SystemSetting for thresholds
### Direction Analyzer
**Responsibility**: Calculate confidence scores for LONG and SHORT directions based on signal alignment.
**Interface**:
```python
def calculate_confidence(
    direction: str,
    dimension_scores: dict[str, float],
    sentiment_classification: str | None,
    conflicts: list[str],
) -> float:
    """Calculate confidence score (0-100%) for a trade direction.
    Args:
        direction: "long" or "short"
        dimension_scores: Dict with keys: technical, sentiment, momentum, fundamental
        sentiment_classification: "bearish", "neutral", "bullish", or None
        conflicts: Conflicts from the Signal Conflict Detector; each applies a penalty
    Returns:
        Confidence score 0-100%
    """
```
**Algorithm**:
```
Base confidence = 50.0
For LONG direction:
- If technical > 60: add 15 points
- If technical > 70: add a further 10 points (25 total)
- If momentum > 60: add 15 points
- If momentum > 70: add a further 5 points (20 total)
- If sentiment is "bullish": add 15 points; if "neutral": add 5 points
- If fundamental > 60: add 10 points
For SHORT direction (mirrored):
- If technical < 40: add 15 points
- If technical < 30: add a further 10 points (25 total)
- If momentum < 40: add 15 points
- If momentum < 30: add a further 5 points (20 total)
- If sentiment is "bearish": add 15 points; if "neutral": add 5 points
- If fundamental < 40: add 10 points
Subtract conflict penalties (10-20 points each), then clamp result to [0, 100]
```
**Rationale**: Rule-based scoring provides transparency and predictability. Weights favor technical (up to 25 points) and momentum (up to 20 points) because they reflect price action, with sentiment and fundamentals as supporting factors.
### Target Generator
**Responsibility**: Identify 3-5 price targets at S/R levels with classification and R:R calculation.
**Interface**:
```python
def generate_targets(
direction: str,
entry_price: float,
stop_loss: float,
sr_levels: list[SRLevel],
atr_value: float,
) -> list[dict]:
"""Generate multiple price targets for a trade setup.
Args:
direction: "long" or "short"
entry_price: Entry price for the trade
stop_loss: Stop loss price
sr_levels: All S/R levels for the ticker
atr_value: Current ATR value
Returns:
List of target dicts with keys:
- price: Target price level
- distance_from_entry: Absolute distance
- distance_atr_multiple: Distance as multiple of ATR
- rr_ratio: Risk-reward ratio for this target
- classification: "Conservative", "Moderate", or "Aggressive"
- sr_level_id: ID of the S/R level used
- sr_strength: Strength score of the S/R level
"""
```
**Algorithm**:
```
1. Filter S/R levels by direction:
- LONG: resistance levels above entry (type="resistance", price > entry)
- SHORT: support levels below entry (type="support", price < entry)
2. Apply volatility filter:
- Exclude levels within 1x ATR of entry (too close)
- If ATR > 5% of price: include levels up to 10x ATR
- If ATR < 2% of price: limit to levels within 3x ATR
3. Calculate quality score for each candidate:
quality = 0.35 * norm_rr + 0.35 * norm_strength + 0.30 * norm_proximity
where:
- norm_rr = min(rr_ratio / 10.0, 1.0)
- norm_strength = strength / 100.0
- norm_proximity = 1.0 - min(distance / entry, 1.0)
4. Sort candidates by quality score descending
5. Select top 3-5 targets:
- Take top 5 if available
- Minimum 3 required (flag setup if fewer)
6. Classify targets by distance:
- Conservative: nearest 1-2 targets
- Aggressive: furthest 1-2 targets
- Moderate: middle targets
7. Calculate R:R ratio for each target:
risk = abs(entry_price - stop_loss)
reward = abs(target_price - entry_price)
rr_ratio = reward / risk
```
**Rationale**: Quality-based selection ensures targets balance multiple factors. ATR-based filtering adapts to volatility. Classification helps traders plan staged exits.
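The filtering and quality-scoring steps above can be sketched as follows. This is a minimal illustration, not the service implementation: S/R levels are plain dicts rather than ORM objects, the mid-volatility reach (2-5% ATR) is assumed to be 5x ATR since the spec only pins down the extremes, and flagging setups with fewer than 3 surviving targets is omitted.

```python
def generate_targets(direction: str, entry_price: float, stop_loss: float,
                     sr_levels: list[dict], atr_value: float) -> list[dict]:
    """Sketch: filter levels by direction and ATR distance, score by quality,
    keep the best 3-5, classify by distance, and compute per-target R:R."""
    risk = abs(entry_price - stop_loss)
    # 1. Keep only levels on the profit side of the entry.
    if direction == "long":
        candidates = [lvl for lvl in sr_levels
                      if lvl["type"] == "resistance" and lvl["price"] > entry_price]
    else:
        candidates = [lvl for lvl in sr_levels
                      if lvl["type"] == "support" and lvl["price"] < entry_price]
    # 2. Volatility filter: drop levels within 1x ATR; cap reach by regime
    #    (5x ATR for the middle regime is an assumption).
    atr_pct = atr_value / entry_price
    max_mult = 10.0 if atr_pct > 0.05 else (3.0 if atr_pct < 0.02 else 5.0)
    # 3. Quality score: 35% normalized R:R, 35% strength, 30% proximity.
    scored = []
    for lvl in candidates:
        dist = abs(lvl["price"] - entry_price)
        if not (atr_value < dist <= max_mult * atr_value):
            continue
        rr = dist / risk
        quality = (0.35 * min(rr / 10.0, 1.0)
                   + 0.35 * lvl["strength"] / 100.0
                   + 0.30 * (1.0 - min(dist / entry_price, 1.0)))
        scored.append((quality, lvl, dist, rr))
    # 4-5. Best 5 by quality, then reorder by distance for classification.
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:5]
    top.sort(key=lambda t: t[2])
    # 6-7. Nearest 1-2 are Conservative, furthest 1-2 Aggressive, rest Moderate.
    targets = []
    for i, (quality, lvl, dist, rr) in enumerate(top):
        if i < 2:
            cls = "Conservative"
        elif i >= max(2, len(top) - 2):
            cls = "Aggressive"
        else:
            cls = "Moderate"
        targets.append({
            "price": lvl["price"],
            "distance_from_entry": dist,
            "distance_atr_multiple": dist / atr_value,
            "rr_ratio": rr,
            "classification": cls,
            "sr_level_id": lvl["id"],
            "sr_strength": lvl["strength"],
        })
    return targets
```

With an entry at 100, stop at 95, and ATR of 2, a resistance at 101 is excluded (inside 1x ATR) and one at 115 is excluded (beyond 5x ATR), leaving only levels in the tradable band.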
### Probability Estimator
**Responsibility**: Calculate probability (0-100%) of reaching each price target.
**Interface**:
```python
def estimate_probability(
target: dict,
dimension_scores: dict[str, float],
sentiment_classification: str | None,
direction: str,
config: dict,
) -> float:
"""Estimate probability of reaching a price target.
Args:
target: Target dict from generate_targets()
dimension_scores: Current dimension scores
sentiment_classification: Latest sentiment
direction: "long" or "short"
config: Configuration dict with weights from SystemSetting
Returns:
Probability percentage 0-100%
"""
```
**Algorithm**:
```
Base probability calculation:
1. Distance factor (40% weight):
- Conservative targets (nearest): base = 70%
- Moderate targets (middle): base = 55%
- Aggressive targets (furthest): base = 40%
2. S/R strength factor (30% weight):
- strength >= 80: add 15%
- strength 60-79: add 10%
- strength 40-59: add 5%
- strength < 40: subtract 10%
3. Signal alignment factor (20% weight):
- Check if signals support direction:
* LONG: technical > 60 AND (sentiment bullish OR momentum > 60)
* SHORT: technical < 40 AND (sentiment bearish OR momentum < 40)
- If aligned: add 15%
- If conflicted: subtract 15%
4. Volatility factor (10% weight):
- If distance_atr_multiple > 5: add 5% (high volatility favors distant targets)
- If distance_atr_multiple < 2: add 5% (low volatility favors near targets)
Final probability = base + strength_adj + alignment_adj + volatility_adj
Clamp to [10, 90] (never 0% or 100% to reflect uncertainty)
```
**Configuration Parameters** (stored in SystemSetting under the `recommendation_` prefix; see SystemSetting Extensions below):
- `signal_alignment_weight`: Default 0.15 (15%)
- `sr_strength_weight`: Default 0.20 (20%)
- `distance_penalty_factor`: Default 0.10 (10%)
**Rationale**: Multi-factor approach balances distance (primary), S/R quality (secondary), and signal confirmation (tertiary). Clamping to [10, 90] acknowledges market uncertainty.
### Signal Conflict Detector
**Responsibility**: Identify contradictions between sentiment, technical, momentum, and fundamental signals.
**Interface**:
```python
def detect_conflicts(
dimension_scores: dict[str, float],
sentiment_classification: str | None,
) -> list[str]:
"""Detect signal conflicts across dimensions.
Args:
dimension_scores: Dict with technical, sentiment, momentum, fundamental scores
sentiment_classification: "bearish", "neutral", "bullish", or None
Returns:
List of conflict descriptions, e.g.:
- "sentiment-technical: Bearish sentiment conflicts with bullish technical (72)"
- "momentum-technical: Momentum (35) diverges from technical (68) by 33 points"
"""
```
**Algorithm**:
```
Conflicts detected:
1. Sentiment-Technical conflict:
- If sentiment="bearish" AND technical > 60: flag conflict
- If sentiment="bullish" AND technical < 40: flag conflict
- Message: "sentiment-technical: {sentiment} sentiment conflicts with {direction} technical ({score})"
2. Momentum-Technical divergence:
- If abs(momentum - technical) > 30: flag conflict
- Message: "momentum-technical: Momentum ({momentum}) diverges from technical ({technical}) by {diff} points"
3. Sentiment-Momentum conflict:
- If sentiment="bearish" AND momentum > 60: flag conflict
- If sentiment="bullish" AND momentum < 40: flag conflict
- Message: "sentiment-momentum: {sentiment} sentiment conflicts with momentum ({score})"
4. Fundamental-Technical divergence (informational only):
- If abs(fundamental - technical) > 40: flag as "weak conflict"
- Message: "fundamental-technical: Fundamental ({fund}) diverges significantly from technical ({tech})"
Return list of all detected conflicts
```
**Impact on Confidence**:
- Each detected conflict reduces confidence by 10-20 points:
- Sentiment-Technical: -20 points
- Momentum-Technical: -15 points
- Sentiment-Momentum: -20 points
- Fundamental-Technical: -10 points (weaker impact)
- Applied in `direction_analyzer.calculate_confidence()` after base calculation
**Rationale**: Conflicts indicate uncertainty and increase risk. Sentiment-technical conflicts are most serious as they represent narrative vs. price action divergence.
## Data Models
### Extended TradeSetup Model
**New Fields**:
```python
class TradeSetup(Base):
__tablename__ = "trade_setups"
# Existing fields (unchanged)
id: Mapped[int] = mapped_column(primary_key=True)
ticker_id: Mapped[int] = mapped_column(ForeignKey("tickers.id", ondelete="CASCADE"))
direction: Mapped[str] = mapped_column(String(10), nullable=False)
entry_price: Mapped[float] = mapped_column(Float, nullable=False)
stop_loss: Mapped[float] = mapped_column(Float, nullable=False)
target: Mapped[float] = mapped_column(Float, nullable=False) # Primary target
rr_ratio: Mapped[float] = mapped_column(Float, nullable=False) # Primary R:R
composite_score: Mapped[float] = mapped_column(Float, nullable=False)
detected_at: Mapped[datetime] = mapped_column(DateTime(timezone=True))
# NEW: Recommendation fields
confidence_score: Mapped[float | None] = mapped_column(Float, nullable=True)
targets_json: Mapped[str | None] = mapped_column(Text, nullable=True)
conflict_flags_json: Mapped[str | None] = mapped_column(Text, nullable=True)
recommended_action: Mapped[str | None] = mapped_column(String(20), nullable=True)
reasoning: Mapped[str | None] = mapped_column(Text, nullable=True)
risk_level: Mapped[str | None] = mapped_column(String(10), nullable=True)
```
**Field Descriptions**:
- `confidence_score`: Float 0-100, confidence in this direction
- `targets_json`: JSON array of target objects (see schema below)
- `conflict_flags_json`: JSON array of conflict strings
- `recommended_action`: Enum-like string: "LONG_HIGH", "LONG_MODERATE", "SHORT_HIGH", "SHORT_MODERATE", "NEUTRAL"
- `reasoning`: Human-readable explanation of recommendation
- `risk_level`: "Low", "Medium", or "High" based on conflicts
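How confidence maps to `recommended_action` and conflicts map to `risk_level` is not pinned down elsewhere in this section; the following is one plausible mapping, using the confidence thresholds defined under SystemSetting Extensions. The conflict-count rule for risk is an assumption, chosen to be consistent with the API example later in this document (confidence 72.5, no conflicts, "LONG_HIGH"/"Low").

```python
def summarize(direction: str, confidence: float, conflicts: list[str],
              high_threshold: float = 70.0,
              moderate_threshold: float = 50.0) -> tuple[str, str]:
    """Hypothetical mapping to the recommended_action and risk_level fields."""
    if confidence >= high_threshold:
        action = f"{direction.upper()}_HIGH"
    elif confidence >= moderate_threshold:
        action = f"{direction.upper()}_MODERATE"
    else:
        action = "NEUTRAL"
    # Assumption: risk rises with the number of detected conflicts.
    if not conflicts:
        risk = "Low"
    elif len(conflicts) == 1:
        risk = "Medium"
    else:
        risk = "High"
    return action, risk
```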
**Targets JSON Schema**:
```json
[
{
"price": 150.25,
"distance_from_entry": 5.25,
"distance_atr_multiple": 2.5,
"rr_ratio": 3.5,
"probability": 65.0,
"classification": "Conservative",
"sr_level_id": 42,
"sr_strength": 75
},
...
]
```
**Conflict Flags JSON Schema**:
```json
[
"sentiment-technical: Bearish sentiment conflicts with bullish technical (72)",
"momentum-technical: Momentum (35) diverges from technical (68) by 33 points"
]
```
**Backward Compatibility**:
- All new fields are nullable
- Existing `target` and `rr_ratio` fields remain as primary target data
- Old consumers can ignore new fields
- New consumers use `targets_json` for full target list
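Because the new fields are nullable JSON text, a consumer can fall back to the legacy single-target fields when they are absent. A sketch, using plain dicts rather than ORM rows; the "Moderate" default for the synthesized target is an assumption:

```python
import json

def get_targets(setup_row: dict) -> list[dict]:
    """Return the full target list when targets_json is populated,
    otherwise synthesize a one-element list from the legacy fields."""
    if setup_row.get("targets_json"):
        return json.loads(setup_row["targets_json"])
    # Legacy fallback: primary target/rr_ratio only, no probability data.
    return [{
        "price": setup_row["target"],
        "rr_ratio": setup_row["rr_ratio"],
        "classification": "Moderate",  # assumption: unknown, default bucket
    }]
```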
### SystemSetting Extensions
**New Configuration Keys**:
```python
# Recommendation thresholds
"recommendation_high_confidence_threshold": 70.0 # % for "High Confidence"
"recommendation_moderate_confidence_threshold": 50.0 # % for "Moderate Confidence"
"recommendation_confidence_diff_threshold": 20.0 # % difference for directional recommendation
# Probability calculation weights
"recommendation_signal_alignment_weight": 0.15 # 15%
"recommendation_sr_strength_weight": 0.20 # 20%
"recommendation_distance_penalty_factor": 0.10 # 10%
# Conflict detection thresholds
"recommendation_momentum_technical_divergence_threshold": 30.0 # points
"recommendation_fundamental_technical_divergence_threshold": 40.0 # points
```
**Access Pattern**:
```python
async def get_recommendation_config(db: AsyncSession) -> dict:
"""Load all recommendation configuration from SystemSetting."""
# Query all keys starting with "recommendation_"
# Return dict with defaults for missing keys
```
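The defaults-merge part of that access pattern can be sketched without the database layer. The async query itself is elided; `rows` stands in for the `(key, value)` pairs returned for the `recommendation_` prefix, with string values cast to float (an assumption about how SystemSetting stores values):

```python
RECOMMENDATION_DEFAULTS = {
    "recommendation_high_confidence_threshold": 70.0,
    "recommendation_moderate_confidence_threshold": 50.0,
    "recommendation_confidence_diff_threshold": 20.0,
    "recommendation_signal_alignment_weight": 0.15,
    "recommendation_sr_strength_weight": 0.20,
    "recommendation_distance_penalty_factor": 0.10,
    "recommendation_momentum_technical_divergence_threshold": 30.0,
    "recommendation_fundamental_technical_divergence_threshold": 40.0,
}

def merge_recommendation_config(rows) -> dict:
    """Overlay stored settings onto defaults; unknown keys are ignored,
    missing keys fall back to their defaults."""
    config = dict(RECOMMENDATION_DEFAULTS)
    for key, value in rows:
        if key in config:
            config[key] = float(value)
    return config
```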
### No New Tables Required
The design intentionally avoids new tables to minimize schema complexity:
- TradeSetup extensions handle all recommendation data
- JSON fields provide flexibility for evolving data structures
- SystemSetting stores configuration
- Existing relationships (Ticker → TradeSetup) remain unchanged
## Algorithm Design
### Confidence Scoring Formula
**Detailed Implementation**:
```python
def calculate_confidence(
direction: str,
dimension_scores: dict[str, float],
sentiment_classification: str | None,
conflicts: list[str],
) -> float:
"""Calculate confidence score with conflict penalties."""
base = 50.0
technical = dimension_scores.get("technical", 50.0)
momentum = dimension_scores.get("momentum", 50.0)
fundamental = dimension_scores.get("fundamental", 50.0)
if direction == "long":
# Technical contribution
if technical > 70:
base += 25.0
elif technical > 60:
base += 15.0
# Momentum contribution
if momentum > 70:
base += 20.0
elif momentum > 60:
base += 15.0
# Sentiment contribution
if sentiment_classification == "bullish":
base += 15.0
elif sentiment_classification == "neutral":
base += 5.0
# Fundamental contribution
if fundamental > 60:
base += 10.0
elif direction == "short":
# Technical contribution
if technical < 30:
base += 25.0
elif technical < 40:
base += 15.0
# Momentum contribution
if momentum < 30:
base += 20.0
elif momentum < 40:
base += 15.0
# Sentiment contribution
if sentiment_classification == "bearish":
base += 15.0
elif sentiment_classification == "neutral":
base += 5.0
# Fundamental contribution
if fundamental < 40:
base += 10.0
# Apply conflict penalties
for conflict in conflicts:
if "sentiment-technical" in conflict:
base -= 20.0
elif "momentum-technical" in conflict:
base -= 15.0
elif "sentiment-momentum" in conflict:
base -= 20.0
elif "fundamental-technical" in conflict:
base -= 10.0
return max(0.0, min(100.0, base))
```
**Scoring Breakdown**:
- Base: 50 points (neutral starting point)
- Technical: up to 25 points (most important - reflects price action)
- Momentum: up to 20 points (confirms trend strength)
- Sentiment: up to 15 points (narrative support)
- Fundamental: up to 10 points (value support)
- Maximum possible: 120 points before conflicts
- Conflicts: -10 to -20 points each
### Probability Calculation Formula
**Detailed Implementation**:
```python
def estimate_probability(
target: dict,
dimension_scores: dict[str, float],
sentiment_classification: str | None,
direction: str,
config: dict,
) -> float:
"""Estimate probability of reaching a price target."""
# 1. Base probability from classification (40% weight)
classification = target["classification"]
if classification == "Conservative":
base_prob = 70.0
elif classification == "Moderate":
base_prob = 55.0
else: # Aggressive
base_prob = 40.0
# 2. S/R strength adjustment (30% weight)
strength = target["sr_strength"]
if strength >= 80:
strength_adj = 15.0
elif strength >= 60:
strength_adj = 10.0
elif strength >= 40:
strength_adj = 5.0
else:
strength_adj = -10.0
# 3. Signal alignment adjustment (20% weight)
technical = dimension_scores.get("technical", 50.0)
momentum = dimension_scores.get("momentum", 50.0)
alignment_adj = 0.0
if direction == "long":
if technical > 60 and (sentiment_classification == "bullish" or momentum > 60):
alignment_adj = 15.0
elif technical < 40 or (sentiment_classification == "bearish" and momentum < 40):
alignment_adj = -15.0
elif direction == "short":
if technical < 40 and (sentiment_classification == "bearish" or momentum < 40):
alignment_adj = 15.0
elif technical > 60 or (sentiment_classification == "bullish" and momentum > 60):
alignment_adj = -15.0
# 4. Volatility adjustment (10% weight)
atr_multiple = target["distance_atr_multiple"]
volatility_adj = 0.0
if atr_multiple > 5:
volatility_adj = 5.0 # High volatility favors distant targets
elif atr_multiple < 2:
volatility_adj = 5.0 # Low volatility favors near targets
# Combine all factors
probability = base_prob + strength_adj + alignment_adj + volatility_adj
# Clamp to [10, 90] to reflect uncertainty
return max(10.0, min(90.0, probability))
```
**Probability Ranges by Classification** (after adjustments and clamping):
- Conservative: 45-90% (typically 70-85%)
- Moderate: 30-90% (typically 50-65%)
- Aggressive: 15-75% (typically 30-45%)
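Two worked examples of the combination-and-clamp step: a near target at a strong level with aligned signals nominally exceeds 100% and is capped at 90, while a distant target at a weak level with conflicted signals bottoms out well above the 10% floor.

```python
def clamp_probability(base_prob: float, strength_adj: float,
                      alignment_adj: float, volatility_adj: float) -> float:
    """Combine the four factors and clamp the result to [10, 90]."""
    return max(10.0, min(90.0, base_prob + strength_adj + alignment_adj + volatility_adj))

# Conservative base (70) + strong level (+15) + aligned signals (+15)
# + near-target volatility bonus (+5) = 105, clamped to 90.
best_case = clamp_probability(70.0, 15.0, 15.0, 5.0)
# Aggressive base (40) + weak level (-10) + conflicted signals (-15) = 15.
worst_case = clamp_probability(40.0, -10.0, -15.0, 0.0)
```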
### Signal Alignment Logic
**Implementation**:
```python
def check_signal_alignment(
direction: str,
dimension_scores: dict[str, float],
sentiment_classification: str | None,
) -> tuple[bool, str]:
"""Check if signals align with the trade direction.
Returns:
(is_aligned, description)
"""
technical = dimension_scores.get("technical", 50.0)
momentum = dimension_scores.get("momentum", 50.0)
if direction == "long":
tech_bullish = technical > 60
momentum_bullish = momentum > 60
sentiment_bullish = sentiment_classification == "bullish"
# Need at least 2 of 3 signals aligned
aligned_count = sum([tech_bullish, momentum_bullish, sentiment_bullish])
if aligned_count >= 2:
return True, f"Signals aligned for LONG: technical={technical:.0f}, momentum={momentum:.0f}, sentiment={sentiment_classification}"
else:
return False, f"Mixed signals for LONG: technical={technical:.0f}, momentum={momentum:.0f}, sentiment={sentiment_classification}"
elif direction == "short":
tech_bearish = technical < 40
momentum_bearish = momentum < 40
sentiment_bearish = sentiment_classification == "bearish"
# Need at least 2 of 3 signals aligned
aligned_count = sum([tech_bearish, momentum_bearish, sentiment_bearish])
if aligned_count >= 2:
return True, f"Signals aligned for SHORT: technical={technical:.0f}, momentum={momentum:.0f}, sentiment={sentiment_classification}"
else:
return False, f"Mixed signals for SHORT: technical={technical:.0f}, momentum={momentum:.0f}, sentiment={sentiment_classification}"
return False, "Unknown direction"
```
**Alignment Criteria**:
- LONG: At least 2 of [technical > 60, momentum > 60, sentiment bullish]
- SHORT: At least 2 of [technical < 40, momentum < 40, sentiment bearish]
- Fundamental score is informational but not required for alignment
## API Design
### Enhanced Trade Setup Endpoints
**GET /api/v1/trades**
Returns all trade setups with recommendation data.
**Query Parameters**:
- `direction`: Optional filter ("long" or "short")
- `min_confidence`: Optional minimum confidence score (0-100)
- `recommended_action`: Optional filter ("LONG_HIGH", "LONG_MODERATE", "SHORT_HIGH", "SHORT_MODERATE", "NEUTRAL")
**Response Schema**:
```json
{
"status": "success",
"data": [
{
"id": 1,
"symbol": "AAPL",
"direction": "long",
"entry_price": 145.00,
"stop_loss": 142.50,
"target": 150.00,
"rr_ratio": 2.0,
"composite_score": 75.5,
"detected_at": "2024-01-15T10:30:00Z",
"confidence_score": 72.5,
"recommended_action": "LONG_HIGH",
"reasoning": "Strong technical (75) and bullish sentiment align with upward momentum (68). No major conflicts detected.",
"risk_level": "Low",
"targets": [
{
"price": 147.50,
"distance_from_entry": 2.50,
"distance_atr_multiple": 1.5,
"rr_ratio": 1.0,
"probability": 75.0,
"classification": "Conservative",
"sr_level_id": 42,
"sr_strength": 80
},
{
"price": 150.00,
"distance_from_entry": 5.00,
"distance_atr_multiple": 3.0,
"rr_ratio": 2.0,
"probability": 60.0,
"classification": "Moderate",
"sr_level_id": 43,
"sr_strength": 70
},
{
"price": 155.00,
"distance_from_entry": 10.00,
"distance_atr_multiple": 6.0,
"rr_ratio": 4.0,
"probability": 35.0,
"classification": "Aggressive",
"sr_level_id": 44,
"sr_strength": 60
}
],
"conflict_flags": []
}
]
}
```
**GET /api/v1/trades/{symbol}**
Returns trade setups for a specific ticker (both LONG and SHORT if available).
**Response**: Same schema as above, filtered by symbol.
### Admin Configuration Endpoints
**GET /api/v1/admin/settings/recommendations**
Get current recommendation configuration.
**Response**:
```json
{
"status": "success",
"data": {
"high_confidence_threshold": 70.0,
"moderate_confidence_threshold": 50.0,
"confidence_diff_threshold": 20.0,
"signal_alignment_weight": 0.15,
"sr_strength_weight": 0.20,
"distance_penalty_factor": 0.10,
"momentum_technical_divergence_threshold": 30.0,
"fundamental_technical_divergence_threshold": 40.0
}
}
```
**PUT /api/v1/admin/settings/recommendations**
Update recommendation configuration.
**Request Body**:
```json
{
"high_confidence_threshold": 75.0,
"signal_alignment_weight": 0.20
}
```
**Response**: Updated configuration object.
**Validation**:
- All thresholds must be 0-100
- All weights must be 0-1
- Returns 400 error for invalid values
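These validation rules can be expressed as a small framework-agnostic helper; the actual endpoint would translate the `ValueError` into the 400 response. Rejecting unknown keys is an added assumption beyond the rules stated above.

```python
THRESHOLD_KEYS = {
    "high_confidence_threshold",
    "moderate_confidence_threshold",
    "confidence_diff_threshold",
    "momentum_technical_divergence_threshold",
    "fundamental_technical_divergence_threshold",
}
WEIGHT_KEYS = {
    "signal_alignment_weight",
    "sr_strength_weight",
    "distance_penalty_factor",
}

def validate_recommendation_update(payload: dict) -> dict:
    """Reject thresholds outside 0-100, weights outside 0-1, and unknown keys."""
    for key, value in payload.items():
        if key in THRESHOLD_KEYS:
            if not 0 <= value <= 100:
                raise ValueError(f"{key} must be between 0 and 100")
        elif key in WEIGHT_KEYS:
            if not 0 <= value <= 1:
                raise ValueError(f"{key} must be between 0 and 1")
        else:
            raise ValueError(f"unknown setting: {key}")
    return payload
```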
## Frontend Components
### Ticker Detail Page Enhancement
**Location**: `frontend/src/components/ticker/RecommendationPanel.tsx`
**Component Structure**:
```tsx
interface RecommendationPanelProps {
symbol: string;
longSetup?: TradeSetup;
shortSetup?: TradeSetup;
}
export function RecommendationPanel({ symbol, longSetup, shortSetup }: RecommendationPanelProps) {
// Display recommendation summary at top
// Show LONG and SHORT setups side-by-side
// Highlight recommended direction
// Display targets table for each direction
// Show conflict warnings if present
}
```
**Visual Design**:
- Recommendation summary card at top with large action text and confidence badge
- Two-column layout: LONG setup on left, SHORT setup on right
- Recommended direction has green border and subtle glow
- Non-recommended direction has muted opacity
- Risk level badge: green (Low), yellow (Medium), red (High)
- Targets table with sortable columns
- Conflict warnings in amber alert box
**Data Flow**:
```tsx
// In TickerDetailPage.tsx
const { data: tradeSetups } = useTradeSetups(symbol);
const longSetup = tradeSetups?.find(s => s.direction === 'long');
const shortSetup = tradeSetups?.find(s => s.direction === 'short');
<RecommendationPanel
symbol={symbol}
longSetup={longSetup}
shortSetup={shortSetup}
/>
```
### Scanner Page Enhancement
**Location**: `frontend/src/components/scanner/TradeTable.tsx`
**New Columns**:
- Recommended Action (with badge)
- Confidence Score (with progress bar)
- Best Target (highest probability target)
- Risk Level (with color-coded badge)
**Filtering Controls**:
```tsx
interface ScannerFilters {
direction?: 'long' | 'short';
minConfidence?: number;
recommendedAction?: 'LONG_HIGH' | 'LONG_MODERATE' | 'SHORT_HIGH' | 'SHORT_MODERATE' | 'NEUTRAL';
riskLevel?: 'Low' | 'Medium' | 'High';
}
```
**Table Enhancement**:
```tsx
<Table>
<TableHeader>
<TableRow>
<TableHead>Symbol</TableHead>
<TableHead>Recommended Action</TableHead>
<TableHead>Confidence</TableHead>
<TableHead>Entry</TableHead>
<TableHead>Stop</TableHead>
<TableHead>Best Target</TableHead>
<TableHead>R:R</TableHead>
<TableHead>Risk Level</TableHead>
<TableHead>Composite</TableHead>
</TableRow>
</TableHeader>
<TableBody>
{setups.map(setup => (
<TableRow
key={setup.id}
onClick={() => navigate(`/ticker/${setup.symbol}`)}
className="cursor-pointer hover:bg-white/5"
>
<TableCell>{setup.symbol}</TableCell>
<TableCell>
<RecommendationBadge action={setup.recommended_action} />
</TableCell>
<TableCell>
<ConfidenceBar value={setup.confidence_score} />
</TableCell>
{/* ... other cells ... */}
</TableRow>
))}
</TableBody>
</Table>
```
**Sorting**:
- Default: Confidence score descending
- Secondary: R:R ratio descending
- Allow sorting by any column
### Admin Settings Page Enhancement
**Location**: `frontend/src/components/admin/RecommendationSettings.tsx`
**Form Fields**:
```tsx
<form onSubmit={handleSubmit}>
<section>
<h3>Confidence Thresholds</h3>
<Input
label="High Confidence Threshold (%)"
type="number"
min={0}
max={100}
value={config.high_confidence_threshold}
onChange={...}
/>
<Input
label="Moderate Confidence Threshold (%)"
type="number"
min={0}
max={100}
value={config.moderate_confidence_threshold}
onChange={...}
/>
<Input
label="Confidence Difference Threshold (%)"
type="number"
min={0}
max={100}
value={config.confidence_diff_threshold}
onChange={...}
helpText="Minimum difference between LONG and SHORT confidence for directional recommendation"
/>
</section>
<section>
<h3>Probability Calculation Weights</h3>
<Input
label="Signal Alignment Weight"
type="number"
min={0}
max={1}
step={0.01}
value={config.signal_alignment_weight}
onChange={...}
/>
<Input
label="S/R Strength Weight"
type="number"
min={0}
max={1}
step={0.01}
value={config.sr_strength_weight}
onChange={...}
/>
<Input
label="Distance Penalty Factor"
type="number"
min={0}
max={1}
step={0.01}
value={config.distance_penalty_factor}
onChange={...}
/>
</section>
<section>
<h3>Conflict Detection Thresholds</h3>
<Input
label="Momentum-Technical Divergence Threshold"
type="number"
min={0}
max={100}
value={config.momentum_technical_divergence_threshold}
onChange={...}
helpText="Points difference to flag momentum-technical conflict"
/>
<Input
label="Fundamental-Technical Divergence Threshold"
type="number"
min={0}
max={100}
value={config.fundamental_technical_divergence_threshold}
onChange={...}
/>
</section>
<Button type="submit">Save Configuration</Button>
<Button type="button" onClick={handleReset}>Reset to Defaults</Button>
</form>
```
**Validation**:
- Client-side validation before submission
- Toast notification on success/error
- Confirmation dialog for reset to defaults
## Database Schema Changes
### Migration Strategy
**Alembic Migration**: `alembic revision -m "add_recommendation_fields_to_trade_setup"`
**Migration Script**:
```python
"""add_recommendation_fields_to_trade_setup
Revision ID: abc123def456
Revises: previous_revision
Create Date: 2024-01-15 10:00:00.000000
"""
from alembic import op
import sqlalchemy as sa
def upgrade():
# Add new columns to trade_setups table
op.add_column('trade_setups',
sa.Column('confidence_score', sa.Float(), nullable=True))
op.add_column('trade_setups',
sa.Column('targets_json', sa.Text(), nullable=True))
op.add_column('trade_setups',
sa.Column('conflict_flags_json', sa.Text(), nullable=True))
op.add_column('trade_setups',
sa.Column('recommended_action', sa.String(20), nullable=True))
op.add_column('trade_setups',
sa.Column('reasoning', sa.Text(), nullable=True))
op.add_column('trade_setups',
sa.Column('risk_level', sa.String(10), nullable=True))
def downgrade():
# Remove columns if rolling back
op.drop_column('trade_setups', 'risk_level')
op.drop_column('trade_setups', 'reasoning')
op.drop_column('trade_setups', 'recommended_action')
op.drop_column('trade_setups', 'conflict_flags_json')
op.drop_column('trade_setups', 'targets_json')
op.drop_column('trade_setups', 'confidence_score')
```
**Deployment Steps**:
1. Run migration: `alembic upgrade head`
2. Deploy new backend code with recommendation_service
3. Trigger R:R scanner job to populate recommendation data
4. Deploy frontend with new components
5. Verify data in admin panel
**Rollback Plan**:
- New fields are nullable, so old code continues to work
- If issues arise, run `alembic downgrade -1`
- Frontend gracefully handles missing recommendation fields
## Integration Points
### Integration with R:R Scanner Service
**Modified `rr_scanner_service.scan_ticker()`**:
```python
async def scan_ticker(
db: AsyncSession,
symbol: str,
rr_threshold: float = 1.5,
atr_multiplier: float = 1.5,
) -> list[TradeSetup]:
"""Scan a single ticker for trade setups with recommendations."""
# ... existing code to generate base setups ...
# NEW: Fetch data needed for recommendations
dimension_scores = await _get_dimension_scores(db, ticker.id)
sentiment_classification = await _get_latest_sentiment(db, ticker.id)
# NEW: Enhance each setup with recommendations
from app.services.recommendation_service import enhance_trade_setup
enhanced_setups = []
for setup in setups:
enhanced = await enhance_trade_setup(
db=db,
ticker=ticker,
setup=setup,
dimension_scores=dimension_scores,
sr_levels=sr_levels,
sentiment_classification=sentiment_classification,
atr_value=atr_value,
)
enhanced_setups.append(enhanced)
# Delete old setups and persist enhanced ones
await db.execute(
delete(TradeSetup).where(TradeSetup.ticker_id == ticker.id)
)
for setup in enhanced_setups:
db.add(setup)
await db.commit()
for s in enhanced_setups:
await db.refresh(s)
return enhanced_setups
```
**Helper Functions**:
```python
async def _get_dimension_scores(db: AsyncSession, ticker_id: int) -> dict[str, float]:
"""Fetch all dimension scores for a ticker."""
result = await db.execute(
select(DimensionScore).where(DimensionScore.ticker_id == ticker_id)
)
scores = {ds.dimension: ds.score for ds in result.scalars().all()}
return scores
async def _get_latest_sentiment(db: AsyncSession, ticker_id: int) -> str | None:
"""Fetch the most recent sentiment classification."""
result = await db.execute(
select(SentimentScore)
.where(SentimentScore.ticker_id == ticker_id)
.order_by(SentimentScore.timestamp.desc())
.limit(1)
)
sentiment = result.scalar_one_or_none()
return sentiment.classification if sentiment else None
```
### Integration with Scoring Service
**Data Dependencies**:
- `DimensionScore` table: technical, sentiment, momentum, fundamental scores
- Accessed via `_get_dimension_scores()` helper
- No modifications to scoring_service required
**Staleness Handling**:
- Recommendation generation uses current dimension scores
- If dimension scores are stale, recommendations reflect that uncertainty
- Consider triggering score recomputation before R:R scan in scheduler
### Integration with S/R Service
**Data Dependencies**:
- `SRLevel` table: price_level, type, strength, detection_method
- Already fetched in `rr_scanner_service.scan_ticker()`
- No modifications to sr_service required
**Usage**:
- Target generation filters S/R levels by type and direction
- Probability estimation uses strength scores
- Quality scoring combines strength with R:R and proximity
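The quality-score combination described above is not pinned down in this section; one plausible sketch, where the weights and the R:R saturation point are illustrative assumptions rather than values fixed by this design:

```python
def target_quality_score(
    rr_ratio: float,
    strength: float,
    distance_atr_multiple: float,
    w_rr: float = 0.4,
    w_strength: float = 0.4,
    w_proximity: float = 0.2,
) -> float:
    """Combine R:R, S/R strength, and proximity into a [0, 1] quality score.

    The weights and the R:R saturation point (3.0) are illustrative
    assumptions, not values fixed by this design.
    """
    rr_component = min(rr_ratio / 3.0, 1.0)             # saturate at R:R = 3
    strength_component = strength / 100.0               # strength stored as 0-100
    proximity_component = max(0.0, 1.0 - distance_atr_multiple / 10.0)
    return (
        w_rr * rr_component
        + w_strength * strength_component
        + w_proximity * proximity_component
    )
```

A weighted sum keeps each factor's contribution bounded and tunable, which fits the admin-configurable weights exposed elsewhere in this design.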
### Integration with Sentiment Service
**Data Dependencies**:
- `SentimentScore` table: classification, confidence, timestamp
- Accessed via `_get_latest_sentiment()` helper
- No modifications to sentiment_service required
**Usage**:
- Conflict detection compares sentiment with technical/momentum
- Confidence scoring adds/subtracts points based on sentiment alignment
- Probability estimation adjusts for sentiment support
### Integration with Indicator Service
**Data Dependencies**:
- ATR calculation already performed in rr_scanner_service
- No additional calls needed
**Usage**:
- Target generation uses ATR for volatility filtering
- Probability estimation uses distance_atr_multiple
## Correctness Properties
A property is a characteristic or behavior that should hold true across all valid executions of a system; in essence, it is a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.
### Property Reflection
After analyzing all acceptance criteria, I identified several areas of redundancy:
**Redundancy Group 1: Confidence Score Range Validation**
- Properties 2.1 and 2.2 both test that confidence scores are 0-100%
- **Resolution**: Combine into single property testing both LONG and SHORT
**Redundancy Group 2: Target Count and Direction**
- Properties 3.1 and 3.2 both test target count (3-5) and direction relationship
- **Resolution**: Combine into single property covering both directions
**Redundancy Group 3: Probability Range by Classification**
- Properties 4.6, 4.7, 4.8 all test probability ranges for different classifications
- **Resolution**: Combine into single property testing all classifications
**Redundancy Group 4: Schema Validation**
- Properties 15.1-15.6 all test individual field existence
- **Resolution**: Combine into single property testing complete schema
**Redundancy Group 5: API Response Schema**
- Properties 7.2-7.6 all test individual response fields
- **Resolution**: Combine into single property testing complete response schema
**Redundancy Group 6: S/R Strength Impact**
- Properties 8.2 and 8.3 both test strength score impact on probability
- **Resolution**: Combine into single monotonicity property
After reflection, 60+ criteria reduce to 38 unique properties.
### Property 1: Bidirectional Setup Generation
For any ticker with sufficient OHLCV data and S/R levels, the recommendation engine shall generate exactly two trade setups: one LONG and one SHORT, each with distinct direction fields.
**Validates: Requirements 1.1, 1.5**
### Property 2: Direction-Appropriate S/R Level Usage
For any LONG setup, all targets shall be resistance levels above entry price. For any SHORT setup, all targets shall be support levels below entry price.
**Validates: Requirements 1.3, 1.4**
### Property 3: Confidence Score Bounds
For any trade setup (LONG or SHORT), the confidence score shall be within the range [0, 100].
**Validates: Requirements 2.1, 2.2**
### Property 4: Conflict Impact on Confidence
For any trade setup, when signal conflicts are detected, the confidence score shall be reduced compared to the same setup without conflicts.
**Validates: Requirements 2.5, 5.7**
### Property 5: Confidence Persistence
For any generated trade setup, the confidence_score field shall be populated in the TradeSetup model.
**Validates: Requirements 2.6**
### Property 6: Target Count and Direction
For any trade setup, the targets array shall contain 3 to 5 targets, all positioned in the correct direction relative to entry (above for LONG, below for SHORT).
**Validates: Requirements 3.1, 3.2, 7.3**
### Property 7: Target Classification Correctness
For any targets array, targets shall be classified such that Conservative targets are nearest to entry, Aggressive targets are furthest, and Moderate targets are in between, based on distance ordering.
**Validates: Requirements 3.3**
### Property 8: R:R Ratio Calculation
For any target in a trade setup, the R:R ratio shall equal (abs(target_price - entry_price)) / (abs(entry_price - stop_loss)).
**Validates: Requirements 3.4**
### Property 9: Target Distance Ordering
For any targets array, targets shall be ordered by increasing distance from entry price.
**Validates: Requirements 3.6**
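Properties 7 and 9 together imply that classification can be derived purely from the distance ordering; a minimal sketch, where the exact Conservative/Moderate/Aggressive split is an assumption consistent with those properties:

```python
def classify_targets(targets: list[dict]) -> list[dict]:
    """Label targets already sorted by increasing distance from entry.

    Nearest -> Conservative, furthest -> Aggressive, anything between ->
    Moderate. The exact split is an illustrative assumption.
    """
    last = len(targets) - 1
    for i, target in enumerate(targets):
        if i == 0:
            target["classification"] = "Conservative"
        elif i == last:
            target["classification"] = "Aggressive"
        else:
            target["classification"] = "Moderate"
    return targets
```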
### Property 10: Probability Bounds
For any target, the probability percentage shall be within the range [10, 90] to reflect market uncertainty.
**Validates: Requirements 4.1**
### Property 11: S/R Strength Monotonicity
For any two targets at the same distance with different S/R strength scores, the target with higher strength shall have equal or higher probability.
**Validates: Requirements 4.2, 8.2, 8.3**
### Property 12: Distance Probability Relationship
For any two targets with the same S/R strength, the target closer to entry shall have higher probability than the target further from entry.
**Validates: Requirements 4.3**
### Property 13: Signal Alignment Impact
For any target, when signals are aligned with the trade direction, the probability shall be higher than when signals are not aligned, all other factors being equal.
**Validates: Requirements 4.4**
### Property 14: Probability Classification Ranges
For any Conservative target, probability shall be above 60%. For any Moderate target, probability shall be between 40% and 70%. For any Aggressive target, probability shall be below 50%.
**Validates: Requirements 4.6, 4.7, 4.8**
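Properties 10 through 13 constrain any estimator to be clamped to [10, 90], monotonic in S/R strength, decreasing in distance, and boosted by signal alignment. A toy estimator satisfying those shapes, with all coefficients as illustrative assumptions:

```python
def estimate_probability(
    strength: float,
    distance_atr_multiple: float,
    signals_aligned: bool,
) -> float:
    """Toy estimator satisfying Properties 10-13; coefficients are assumptions."""
    probability = 40.0 + 40.0 * (strength / 100.0)  # strength normalized to [0, 1]
    probability -= 5.0 * distance_atr_multiple       # distance penalty
    if signals_aligned:
        probability += 10.0                          # alignment boost
    return max(10.0, min(90.0, probability))         # clamp to [10, 90]
```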
### Property 15: Sentiment-Technical Conflict Detection
For any ticker, when sentiment is bearish and technical score is above 60, OR when sentiment is bullish and technical score is below 40, a sentiment-technical conflict shall be flagged.
**Validates: Requirements 5.1, 5.2, 5.3**
### Property 16: Momentum-Technical Divergence Detection
For any ticker, when the absolute difference between momentum score and technical score exceeds 30 points, a momentum-technical conflict shall be flagged.
**Validates: Requirements 5.4, 5.5**
### Property 17: Conflict Persistence
For any trade setup with detected conflicts, the conflict_flags array shall contain descriptions of all detected conflicts.
**Validates: Requirements 5.6**
### Property 18: Recommended Action Validity
For any trade setup, the recommended_action field shall be one of: "LONG_HIGH", "LONG_MODERATE", "SHORT_HIGH", "SHORT_MODERATE", or "NEUTRAL".
**Validates: Requirements 6.1**
### Property 19: Reasoning Presence
For any trade setup, the reasoning field shall be populated with non-empty text explaining the recommendation.
**Validates: Requirements 6.5**
### Property 20: Risk Level Validity
For any trade setup, the risk_level field shall be one of: "Low", "Medium", or "High".
**Validates: Requirements 6.6**
### Property 21: Composite Score Inclusion
For any trade setup, the composite_score field shall be populated with the ticker's composite score.
**Validates: Requirements 6.7**
### Property 22: API Bidirectional Response
For any ticker with trade setups, the API endpoint shall return both LONG and SHORT setups.
**Validates: Requirements 7.1**
### Property 23: API Response Schema Completeness
For any trade setup returned by the API, the response shall include: confidence_score, targets array, conflict_flags array, recommended_action, reasoning, risk_level, and composite_score fields.
**Validates: Requirements 7.2, 7.3, 7.5, 7.6**
### Property 24: API Target Object Schema
For any target object in the API response, it shall include: price, distance_from_entry, rr_ratio, probability, and classification fields.
**Validates: Requirements 7.4**
### Property 25: API Response Ordering
For any API response containing multiple setups, setups shall be ordered by confidence score in descending order.
**Validates: Requirements 7.7**
### Property 26: S/R Strength Retrieval
For any target generated, the S/R strength score shall be correctly retrieved from the SRLevel model and included in the target object.
**Validates: Requirements 8.1**
### Property 27: Strength Score Normalization
For any S/R strength score used in probability calculation, it shall be normalized to the range [0, 1] before application.
**Validates: Requirements 8.4**
### Property 28: ATR Retrieval
For any ticker being analyzed, the current ATR value shall be retrieved and used in target generation.
**Validates: Requirements 9.1**
### Property 29: High Volatility Target Inclusion
For any ticker where ATR exceeds 5% of current price, the target generator shall include S/R levels up to 10x ATR distance as valid targets.
**Validates: Requirements 9.2**
### Property 30: Low Volatility Target Restriction
For any ticker where ATR is below 2% of current price, the target generator shall limit targets to S/R levels within 3x ATR distance.
**Validates: Requirements 9.3**
### Property 31: ATR Multiple Calculation
For any target, the distance_atr_multiple field shall equal (abs(target_price - entry_price)) / ATR.
**Validates: Requirements 9.4**
### Property 32: Minimum Distance Filter
For any generated targets, no target shall be closer than 1x ATR from the entry price.
**Validates: Requirements 9.5**
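Properties 29, 30, and 32 jointly define a distance window for candidate targets. A sketch of that window; the 5x ATR cap for normal volatility is an assumption, since the design only fixes the high- and low-volatility cases:

```python
def atr_distance_window(atr_value: float, current_price: float) -> tuple[float, float]:
    """Return (min_distance, max_distance) for valid targets per Properties 29, 30, 32.

    The design fixes the high-volatility (ATR > 5% of price -> 10x ATR) and
    low-volatility (ATR < 2% of price -> 3x ATR) caps plus the 1x ATR minimum;
    the 5x cap for normal volatility is an assumption.
    """
    atr_pct = atr_value / current_price
    if atr_pct > 0.05:      # high volatility: widen the search
        max_multiple = 10.0
    elif atr_pct < 0.02:    # low volatility: keep targets close
        max_multiple = 3.0
    else:                   # normal volatility (assumed cap)
        max_multiple = 5.0
    return (1.0 * atr_value, max_multiple * atr_value)
```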
### Property 33: Timestamp Presence
For any generated trade setup, the detected_at field shall be populated with a timestamp.
**Validates: Requirements 10.1**
### Property 34: Single Ticker Performance
For any single ticker recommendation generation, the operation shall complete within 500 milliseconds.
**Validates: Requirements 14.1**
### Property 35: Batch Processing Resilience
For any batch of tickers, if recommendation generation fails for one ticker, the engine shall continue processing remaining tickers without stopping.
**Validates: Requirements 14.5**
### Property 36: Dimension Score Query Efficiency
For any ticker recommendation generation, all required dimension scores shall be retrieved in a single database query.
**Validates: Requirements 14.3**
### Property 37: TradeSetup Model Schema
The TradeSetup model shall include all required fields: confidence_score (Float), targets_json (Text), conflict_flags_json (Text), recommended_action (String), reasoning (Text), and risk_level (String).
**Validates: Requirements 15.1, 15.2, 15.3, 15.4, 15.5, 15.6**
### Property 38: Backward Compatibility
For any trade setup, the existing fields (entry_price, stop_loss, target, rr_ratio) shall remain populated with the primary target data for backward compatibility.
**Validates: Requirements 15.7**
## Error Handling
### Service-Level Error Handling
**recommendation_service.py**:
```python
async def enhance_trade_setup(...) -> TradeSetup:
"""Enhance trade setup with error handling."""
try:
# Calculate confidence
confidence = direction_analyzer.calculate_confidence(...)
# Generate targets
targets = target_generator.generate_targets(...)
# Estimate probabilities
for target in targets:
target["probability"] = probability_estimator.estimate_probability(...)
# Detect conflicts
conflicts = signal_conflict_detector.detect_conflicts(...)
# Generate recommendation summary
recommendation = _generate_recommendation_summary(...)
# Update setup model
setup.confidence_score = confidence
setup.targets_json = json.dumps(targets)
setup.conflict_flags_json = json.dumps(conflicts)
setup.recommended_action = recommendation["action"]
setup.reasoning = recommendation["reasoning"]
setup.risk_level = recommendation["risk_level"]
return setup
except Exception as e:
logger.exception(f"Error enhancing trade setup for {ticker.symbol}: {e}")
# Return setup with minimal recommendation data
setup.confidence_score = None
setup.reasoning = f"Recommendation generation failed: {str(e)}"
setup.risk_level = "High"
return setup
```
**Graceful Degradation**:
- If recommendation enhancement fails, return base setup without recommendation data
- Log error for debugging but don't fail the entire scan
- Set risk_level to "High" to warn users of incomplete analysis
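The `_generate_recommendation_summary` helper referenced in the code above is not specified in this section. A sketch of its action-selection step, using the Property 18 action vocabulary and parameter names that mirror the admin configuration keys; the default cutoffs are assumptions:

```python
def select_recommended_action(
    long_confidence: float,
    short_confidence: float,
    high_confidence_threshold: float = 70.0,
    moderate_confidence_threshold: float = 50.0,
    confidence_diff_threshold: float = 10.0,
) -> str:
    """Pick one of the Property 18 actions; default thresholds are assumptions."""
    if abs(long_confidence - short_confidence) < confidence_diff_threshold:
        return "NEUTRAL"                      # no clear directional edge
    if long_confidence > short_confidence:
        best, direction = long_confidence, "LONG"
    else:
        best, direction = short_confidence, "SHORT"
    if best >= high_confidence_threshold:
        return f"{direction}_HIGH"
    if best >= moderate_confidence_threshold:
        return f"{direction}_MODERATE"
    return "NEUTRAL"                          # neither side is convincing
```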
### API-Level Error Handling
**Trade Setup Endpoints**:
```python
@router.get("/trades")
async def get_trade_setups(
direction: str | None = None,
min_confidence: float | None = None,
recommended_action: str | None = None,
db: AsyncSession = Depends(get_db),
):
"""Get trade setups with validation."""
try:
# Validate parameters
if min_confidence is not None and not (0 <= min_confidence <= 100):
raise ValidationError("min_confidence must be between 0 and 100")
if recommended_action is not None:
valid_actions = ["LONG_HIGH", "LONG_MODERATE", "SHORT_HIGH", "SHORT_MODERATE", "NEUTRAL"]
if recommended_action not in valid_actions:
raise ValidationError(f"recommended_action must be one of: {', '.join(valid_actions)}")
# Fetch and filter setups
setups = await rr_scanner_service.get_trade_setups(db, direction)
# Apply filters
if min_confidence is not None:
setups = [s for s in setups if s.get("confidence_score", 0) >= min_confidence]
if recommended_action is not None:
setups = [s for s in setups if s.get("recommended_action") == recommended_action]
return {"status": "success", "data": setups}
except ValidationError as e:
raise HTTPException(status_code=400, detail=str(e))
except Exception as e:
logger.exception(f"Error fetching trade setups: {e}")
raise HTTPException(status_code=500, detail="Internal server error")
```
**Admin Configuration Endpoints**:
```python
@router.put("/admin/settings/recommendations")
async def update_recommendation_config(
config: RecommendationConfigUpdate,
db: AsyncSession = Depends(get_db),
_: User = Depends(require_admin),
):
"""Update recommendation configuration with validation."""
try:
# Validate thresholds (0-100)
for key in ["high_confidence_threshold", "moderate_confidence_threshold", "confidence_diff_threshold"]:
if hasattr(config, key):
value = getattr(config, key)
if value is not None and not (0 <= value <= 100):
raise ValidationError(f"{key} must be between 0 and 100")
# Validate weights (0-1)
for key in ["signal_alignment_weight", "sr_strength_weight", "distance_penalty_factor"]:
if hasattr(config, key):
value = getattr(config, key)
if value is not None and not (0 <= value <= 1):
raise ValidationError(f"{key} must be between 0 and 1")
# Update settings
updated = await settings_service.update_recommendation_config(db, config.dict(exclude_unset=True))
return {"status": "success", "data": updated}
except ValidationError as e:
raise HTTPException(status_code=400, detail=str(e))
except Exception as e:
logger.exception(f"Error updating recommendation config: {e}")
raise HTTPException(status_code=500, detail="Internal server error")
```
### Data Validation
**JSON Field Validation**:
- Validate targets_json structure before parsing
- Handle malformed JSON gracefully
- Provide default empty arrays for missing data
**Null Handling**:
- All new TradeSetup fields are nullable
- Frontend checks for null before rendering
- API returns null fields explicitly (not omitted)
## Testing Strategy
### Dual Testing Approach
This feature requires both unit tests and property-based tests for comprehensive coverage:
**Unit Tests**: Verify specific examples, edge cases, and integration points
**Property Tests**: Verify universal properties across all inputs using randomization
Together, these approaches ensure both concrete correctness (unit tests) and general correctness (property tests).
### Property-Based Testing
**Framework**: Hypothesis (Python property-based testing library)
**Configuration**: Each property test shall run a minimum of 100 iterations to ensure comprehensive input coverage.
**Test Organization**: `tests/property/test_recommendation_properties.py`
**Example Property Test**:
```python
from hypothesis import given, strategies as st
import pytest
@given(
technical=st.floats(min_value=0, max_value=100),
momentum=st.floats(min_value=0, max_value=100),
sentiment=st.sampled_from(["bearish", "neutral", "bullish", None]),
)
@pytest.mark.property
def test_confidence_score_bounds(technical, momentum, sentiment):
"""Feature: intelligent-trade-recommendations, Property 3: Confidence Score Bounds
For any trade setup (LONG or SHORT), the confidence score shall be
within the range [0, 100].
"""
from app.services.recommendation_service import direction_analyzer
dimension_scores = {
"technical": technical,
"momentum": momentum,
"fundamental": 50.0,
}
# Test LONG direction
long_confidence = direction_analyzer.calculate_confidence(
direction="long",
dimension_scores=dimension_scores,
sentiment_classification=sentiment,
conflicts=[],
)
assert 0 <= long_confidence <= 100, f"LONG confidence {long_confidence} out of bounds"
# Test SHORT direction
short_confidence = direction_analyzer.calculate_confidence(
direction="short",
dimension_scores=dimension_scores,
sentiment_classification=sentiment,
conflicts=[],
)
assert 0 <= short_confidence <= 100, f"SHORT confidence {short_confidence} out of bounds"
```
**Property Test Tags**: Each test includes a comment with format:
```python
"""Feature: intelligent-trade-recommendations, Property {N}: {Property Title}
{Property description from design document}
"""
```
### Unit Testing
**Test Organization**: `tests/unit/test_recommendation_service.py`
**Unit Test Focus**:
- Specific examples from requirements (e.g., bullish sentiment + high technical = high LONG confidence)
- Edge cases (e.g., fewer than 3 S/R levels available)
- Integration points (e.g., R:R scanner calling recommendation service)
- Error conditions (e.g., missing dimension scores, malformed data)
**Example Unit Tests**:
```python
import pytest
from app.services.recommendation_service import direction_analyzer
def test_high_confidence_long_example():
"""Feature: intelligent-trade-recommendations, Requirement 2.3
WHEN sentiment is bullish AND technical score is above 60 AND momentum
score is above 60, THE Direction_Analyzer SHALL assign LONG confidence
above 70%.
"""
dimension_scores = {
"technical": 75.0,
"momentum": 68.0,
"fundamental": 50.0,
}
confidence = direction_analyzer.calculate_confidence(
direction="long",
dimension_scores=dimension_scores,
sentiment_classification="bullish",
conflicts=[],
)
assert confidence > 70.0, f"Expected LONG confidence > 70%, got {confidence}"
def test_high_confidence_short_example():
"""Feature: intelligent-trade-recommendations, Requirement 2.4
WHEN sentiment is bearish AND technical score is below 40 AND momentum
score is below 40, THE Direction_Analyzer SHALL assign SHORT confidence
above 70%.
"""
dimension_scores = {
"technical": 32.0,
"momentum": 35.0,
"fundamental": 50.0,
}
confidence = direction_analyzer.calculate_confidence(
direction="short",
dimension_scores=dimension_scores,
sentiment_classification="bearish",
conflicts=[],
)
assert confidence > 70.0, f"Expected SHORT confidence > 70%, got {confidence}"
def test_limited_targets_edge_case():
"""Feature: intelligent-trade-recommendations, Requirement 3.5
WHEN fewer than 3 S/R levels exist in the target direction, THE
Target_Generator SHALL use the available levels and flag the setup
as having limited targets.
"""
from app.services.recommendation_service import target_generator
# Only 2 resistance levels available
sr_levels = [
SRLevel(price_level=150.0, type="resistance", strength=70),
SRLevel(price_level=155.0, type="resistance", strength=65),
]
targets = target_generator.generate_targets(
direction="long",
entry_price=145.0,
stop_loss=142.0,
sr_levels=sr_levels,
atr_value=2.0,
)
assert len(targets) == 2, f"Expected 2 targets, got {len(targets)}"
# Check for limited targets flag in reasoning or metadata
```
**Test Fixtures** (`tests/conftest.py`):
```python
@pytest.fixture
def sample_dimension_scores():
"""Sample dimension scores for testing."""
return {
"technical": 65.0,
"sr_quality": 70.0,
"sentiment": 60.0,
"fundamental": 55.0,
"momentum": 62.0,
}
@pytest.fixture
def sample_sr_levels():
"""Sample S/R levels for testing."""
return [
SRLevel(id=1, price_level=140.0, type="support", strength=75),
SRLevel(id=2, price_level=145.0, type="support", strength=80),
SRLevel(id=3, price_level=155.0, type="resistance", strength=70),
SRLevel(id=4, price_level=160.0, type="resistance", strength=65),
SRLevel(id=5, price_level=165.0, type="resistance", strength=60),
]
```
### Frontend Testing
**Framework**: Vitest with React Testing Library
**Test Organization**: `frontend/src/components/**/*.test.tsx`
**Component Tests**:
```typescript
import { describe, it, expect } from 'vitest';
import { render, screen } from '@testing-library/react';
import { RecommendationPanel } from './RecommendationPanel';
describe('RecommendationPanel', () => {
it('displays LONG and SHORT setups side-by-side', () => {
const longSetup = {
direction: 'long',
confidence_score: 75.0,
recommended_action: 'LONG_HIGH',
// ... other fields
};
const shortSetup = {
direction: 'short',
confidence_score: 45.0,
recommended_action: 'SHORT_MODERATE',
// ... other fields
};
render(
<RecommendationPanel
symbol="AAPL"
longSetup={longSetup}
shortSetup={shortSetup}
/>
);
expect(screen.getByText(/LONG/i)).toBeInTheDocument();
expect(screen.getByText(/SHORT/i)).toBeInTheDocument();
expect(screen.getByText('75.0')).toBeInTheDocument();
expect(screen.getByText('45.0')).toBeInTheDocument();
});
it('highlights recommended direction with visual emphasis', () => {
const longSetup = {
direction: 'long',
confidence_score: 75.0,
recommended_action: 'LONG_HIGH',
// ... other fields
};
const shortSetup = {
direction: 'short',
confidence_score: 45.0,
recommended_action: 'SHORT_MODERATE',
// ... other fields
};
const { container } = render(
<RecommendationPanel
symbol="AAPL"
longSetup={longSetup}
shortSetup={shortSetup}
/>
);
// LONG should have recommended styling
const longCard = container.querySelector('[data-direction="long"]');
expect(longCard).toHaveClass('border-green-500');
// SHORT should have muted styling
const shortCard = container.querySelector('[data-direction="short"]');
expect(shortCard).toHaveClass('opacity-60');
});
it('displays conflict warnings when present', () => {
const setupWithConflicts = {
direction: 'long',
confidence_score: 55.0,
conflict_flags: [
'sentiment-technical: Bearish sentiment conflicts with bullish technical (72)',
],
// ... other fields
};
render(
<RecommendationPanel
symbol="AAPL"
longSetup={setupWithConflicts}
/>
);
expect(screen.getByText(/Bearish sentiment conflicts/i)).toBeInTheDocument();
});
});
```
### Integration Testing
**End-to-End Flow Test**:
```python
@pytest.mark.asyncio
async def test_recommendation_generation_e2e(db_session, sample_ticker):
"""Test complete recommendation generation flow."""
    import json

    from app.services.rr_scanner_service import scan_ticker
# Setup: Ensure ticker has all required data
# - OHLCV records
# - Dimension scores
# - S/R levels
# - Sentiment scores
# Execute: Run scanner with recommendation enhancement
setups = await scan_ticker(
db=db_session,
symbol=sample_ticker.symbol,
rr_threshold=1.5,
atr_multiplier=1.5,
)
# Verify: Both LONG and SHORT setups generated
assert len(setups) == 2
long_setup = next(s for s in setups if s.direction == "long")
short_setup = next(s for s in setups if s.direction == "short")
# Verify: Recommendation fields populated
assert long_setup.confidence_score is not None
assert long_setup.targets_json is not None
assert long_setup.recommended_action is not None
assert long_setup.reasoning is not None
assert long_setup.risk_level is not None
# Verify: Targets structure
targets = json.loads(long_setup.targets_json)
assert 3 <= len(targets) <= 5
for target in targets:
assert "price" in target
assert "probability" in target
assert "classification" in target
```
### Performance Testing
**Benchmark Tests**:
```python
import time
import pytest
@pytest.mark.benchmark
@pytest.mark.asyncio
async def test_single_ticker_performance(db_session, sample_ticker):
"""Feature: intelligent-trade-recommendations, Property 34
For any single ticker recommendation generation, the operation shall
complete within 500 milliseconds.
"""
from app.services.rr_scanner_service import scan_ticker
    start = time.perf_counter()
    await scan_ticker(db=db_session, symbol=sample_ticker.symbol)
    elapsed = (time.perf_counter() - start) * 1000  # Convert to ms
assert elapsed < 500, f"Recommendation generation took {elapsed}ms, expected < 500ms"
@pytest.mark.benchmark
@pytest.mark.asyncio
async def test_batch_processing_throughput(db_session, sample_tickers):
    """Feature: intelligent-trade-recommendations, Requirement 14.2
WHEN the scheduled job generates recommendations for all tickers, THE
Trade_Recommendation_Engine SHALL process at least 10 tickers per second.
"""
from app.services.rr_scanner_service import scan_all_tickers
    start = time.perf_counter()
    await scan_all_tickers(db=db_session)
    elapsed = time.perf_counter() - start
throughput = len(sample_tickers) / elapsed
assert throughput >= 10, f"Throughput {throughput:.2f} tickers/sec, expected >= 10"
```
### Test Coverage Goals
- **Unit Tests**: 80%+ code coverage for recommendation_service
- **Property Tests**: 100% coverage of all 38 correctness properties
- **Integration Tests**: Complete E2E flow from scanner to API response
- **Frontend Tests**: 70%+ coverage for recommendation components
- **Performance Tests**: Verify both single-ticker and batch performance targets
## Implementation Roadmap
### Phase 1: Backend Core (Week 1)
1. Create Alembic migration for TradeSetup model extensions
2. Implement direction_analyzer module with confidence calculation
3. Implement signal_conflict_detector module
4. Write unit tests for confidence scoring and conflict detection
### Phase 2: Target Generation (Week 1-2)
1. Implement target_generator module with quality scoring
2. Implement probability_estimator module
3. Add volatility-based filtering logic
4. Write unit tests and property tests for target generation
### Phase 3: Integration (Week 2)
1. Create recommendation_service orchestrator
2. Integrate with rr_scanner_service
3. Add SystemSetting configuration support
4. Write integration tests for complete flow
### Phase 4: API Layer (Week 2-3)
1. Extend trade setup endpoints with filtering
2. Create admin configuration endpoints
3. Update Pydantic schemas for responses
4. Write API tests
### Phase 5: Frontend (Week 3)
1. Create RecommendationPanel component
2. Enhance Scanner table with new columns
3. Add RecommendationSettings admin page
4. Write component tests
### Phase 6: Testing & Optimization (Week 3-4)
1. Implement all 38 property-based tests
2. Run performance benchmarks
3. Optimize database queries if needed
4. Complete integration testing
### Phase 7: Deployment (Week 4)
1. Run migration on staging database
2. Deploy backend to staging
3. Deploy frontend to staging
4. User acceptance testing
5. Production deployment
## Success Metrics
- **Correctness**: All 38 properties pass with 100 iterations
- **Performance**: Single ticker < 500ms, batch >= 10 tickers/sec
- **Coverage**: 80%+ backend unit test coverage, 70%+ frontend coverage
- **User Adoption**: 80%+ of users interact with recommendation features within first month
- **Accuracy**: Track recommendation outcomes for future validation (Phase 2 feature)