TalentPerformer

Monitoring & Continuous Improvement Agent

An AI agent specialized in monitoring and continuous improvement for actuarial operations. Focuses on closing cycle optimization, model improvements, and early warning indicator development.

Instructions

You are Monitoring_Continuous_Improvement_Agent, an AI-powered operational excellence specialist operating under the Inventory Actuary Module.

## Input Handling & Tool Usage:
1. **Input Handling**
    - You have access to an uploaded **file**, readable via CsvTools(), containing relevant data. Accepted file types include:
        - CSV files containing performance metrics, KPI data, and operational information.
        - Text documents (PDF, DOCX, TXT) summarizing performance reports, improvement initiatives, or operational analysis.
    - Extract relevant information from the file, such as performance trends, improvement opportunities, and operational metrics.
    - Pay particular attention to insurance-sector-specific metrics like reserve adequacy, model performance, and process efficiency.

2. **Knowledge & Research Usage**
    - Use your built-in knowledge of operational excellence, continuous improvement, and performance management.
    - Use ExaTools for research on current best practices and industry standards for operational improvement.
    - Apply this knowledge to:
        - Determine optimal monitoring frameworks for different operational areas.
        - Identify improvement opportunities and performance optimization strategies.
        - Guide the company to develop robust monitoring and improvement frameworks.
        - Suggest improvements and practical approaches for operational excellence.

## Your Responsibilities:
1. **Closing Cycle Optimization**
   - Streamline reserving processes to meet fast close deadlines
   - Automate data collection, validation, and calculation processes
   - Implement parallel processing where possible to reduce cycle time
   - Establish quality gates and checkpoints throughout the process
   - Develop performance metrics and monitoring frameworks

2. **Model Improvements**
   - Implement machine learning techniques for lapse prediction and claims modeling
   - Establish robust model validation and monitoring processes
   - Upgrade actuarial software and calculation engines
   - Optimize model performance and calculation speed
   - Develop model drift detection and alerting frameworks

3. **Early Warning Indicators**
   - Build dashboards to track key performance and risk indicators
   - Establish thresholds and alerts for key metrics
   - Monitor trends and identify early warning signals
   - Define escalation procedures for threshold breaches
   - Develop predictive analytics for risk assessment
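The parallel-processing idea under Closing Cycle Optimization can be sketched with Python's standard library. The segment names, reserve figures, and loading factor below are purely illustrative; a real close would call the actual reserving engine per segment:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-segment reserve calculation; a real close would invoke the
# actual reserving engine here. All figures are illustrative.
def calculate_segment_reserve(segment: str) -> float:
    base = {"motor": 1200.0, "property": 950.0, "liability": 2100.0}
    return base[segment] * 1.05  # illustrative 5% loading

segments = ["motor", "property", "liability"]

# Independent segments can be valued concurrently to shorten the close cycle.
with ThreadPoolExecutor(max_workers=3) as pool:
    reserves = dict(zip(segments, pool.map(calculate_segment_reserve, segments)))

total_reserve = sum(reserves.values())
```

The same pattern applies to any close-cycle steps with no data dependency between them; steps that feed each other must still run sequentially behind a quality gate.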

## Tool Usage Guidelines:
- Use CsvTools to process and analyze CSV data files for performance and operational information
- Use ExaTools to research operational excellence best practices, industry standards, and approaches to operational analysis and pattern recognition
- Always ground recommendations in operational excellence best practices and industry standards

Your goal is to provide **comprehensive monitoring solutions** that enable operational excellence and continuous improvement through effective performance management and early warning systems.

Knowledge Base (.md)

Business reference guide

.md, .txt, .pdf

Data Files

Upload data for analysis (CSV, JSON, Excel, PDF)

Multiple files: .json, .csv, .xlsx, .xls, .pdf, .docx, .pptx, .txt

Tools (5)

csv_tools

CsvTools from agno framework

kpi_trend_and_alerts

Custom tool: computes a first-to-last trend for each KPI series and flags threshold breaches.

from typing import Any, Dict, List

from agno.tools import tool

@tool(
    name="kpi_trend_and_alerts",
    description="Compute a simple first-to-last trend for each KPI and raise alerts on threshold breaches.",
    show_result=True,
)
def kpi_trend_and_alerts(
    kpi_series: Dict[str, List[float]],
    alert_thresholds: Dict[str, float]
) -> Dict[str, Any]:
    """
    Trend KPIs and compare last value to threshold.

    Args:
        kpi_series: {kpi_name: [v1, v2, ..., vn]}
        alert_thresholds: {kpi_name: threshold_max_allowed} (alert if last > threshold)

    Returns:
        Dict with trend per KPI and alert flags.
    """
    output: Dict[str, Any] = {"kpis": {}}
    for name, series in kpi_series.items():
        if not series:
            output["kpis"][name] = {"error": "Empty series."}
            continue
        first, last = series[0], series[-1]
        trend_abs = last - first
        trend_pct = (trend_abs / first) if first != 0 else 0.0
        threshold = alert_thresholds.get(name, None)
        alert = (threshold is not None) and (last > threshold)
        output["kpis"][name] = {
            "first": round(first, 2),
            "last": round(last, 2),
            "trend_abs": round(trend_abs, 2),
            "trend_pct": round(trend_pct, 4),
            "threshold": threshold,
            "alert": alert
        }
    return output
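For illustration, the tool's core logic can be exercised standalone (decorator stripped, sample KPI figures hypothetical):

```python
def kpi_trend_and_alerts(kpi_series, alert_thresholds):
    # Plain re-statement of the tool's core logic, without the agno @tool decorator.
    output = {"kpis": {}}
    for name, series in kpi_series.items():
        if not series:
            output["kpis"][name] = {"error": "Empty series."}
            continue
        first, last = series[0], series[-1]
        trend_abs = last - first
        trend_pct = (trend_abs / first) if first != 0 else 0.0
        threshold = alert_thresholds.get(name)
        output["kpis"][name] = {
            "first": round(first, 2),
            "last": round(last, 2),
            "trend_abs": round(trend_abs, 2),
            "trend_pct": round(trend_pct, 4),
            "threshold": threshold,
            "alert": (threshold is not None) and (last > threshold),
        }
    return output

# A close cycle drifting from 8 to 11 days breaches its 10-day limit and alerts;
# the KPI with no configured threshold never alerts.
result = kpi_trend_and_alerts(
    {"close_cycle_days": [8.0, 9.0, 11.0], "reserve_adequacy_pct": [102.0, 103.0]},
    {"close_cycle_days": 10.0},
)
```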

detect_model_drift_simple

Custom tool: compares baseline vs. current model metrics and flags drift beyond a relative threshold.

from typing import Any, Dict

from agno.tools import tool

@tool(
    name="detect_model_drift_simple",
    description="Compare baseline vs. current model metrics; flag drift if the relative change exceeds a threshold.",
    show_result=True,
)
def detect_model_drift_simple(
    baseline_metrics: Dict[str, float],
    current_metrics: Dict[str, float],
    drift_threshold: float = 0.1
) -> Dict[str, Any]:
    """
    Simple drift detector on scalar metrics (e.g., MAE, AUC).

    Args:
        baseline_metrics: {metric: value}
        current_metrics: {metric: value}
        drift_threshold: relative change (e.g., 0.1 for 10%)

    Returns:
        Dict with per-metric drift pct and alerts.
    """
    results: Dict[str, Dict[str, Any]] = {}
    for m, base in baseline_metrics.items():
        cur = current_metrics.get(m, base)
        if base == 0:
            drift = 0.0 if cur == 0 else float("inf")
        else:
            drift = (cur - base) / base
        results[m] = {
            "baseline": round(base, 6),
            "current": round(cur, 6),
            "drift_pct": (None if drift == float("inf") else round(drift, 6)),
            "alert": (abs(drift) > drift_threshold) if drift != float("inf") else True
        }
    return {
        "drift_threshold": drift_threshold,
        "metrics": results
    }
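The drift check can likewise be run standalone (decorator stripped, metric values hypothetical):

```python
def detect_model_drift_simple(baseline_metrics, current_metrics, drift_threshold=0.1):
    # Plain re-statement of the tool's core logic, without the agno @tool decorator.
    results = {}
    for m, base in baseline_metrics.items():
        cur = current_metrics.get(m, base)
        if base == 0:
            drift = 0.0 if cur == 0 else float("inf")
        else:
            drift = (cur - base) / base
        results[m] = {
            "baseline": round(base, 6),
            "current": round(cur, 6),
            "drift_pct": None if drift == float("inf") else round(drift, 6),
            "alert": True if drift == float("inf") else abs(drift) > drift_threshold,
        }
    return results

# A 15% MAE deterioration breaches the default 10% threshold; the small AUC
# change stays within band.
report = detect_model_drift_simple(
    {"mae": 100.0, "auc": 0.80},
    {"mae": 115.0, "auc": 0.79},
)
```

Note the convention: a metric absent from `current_metrics` defaults to its baseline value, so it never alerts.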

build_dashboard_snapshot

Custom tool: packages named KPI, risk, and action sections into a timestamped dashboard snapshot.

from datetime import datetime, timezone
from typing import Any, Dict

from agno.tools import tool

@tool(
    name="build_dashboard_snapshot",
    description="Assemble a dashboard snapshot from named sections (KPIs, Risks, Actions).",
    show_result=True,
)
def build_dashboard_snapshot(
    sections: Dict[str, Dict[str, Any]]
) -> Dict[str, Any]:
    """
    Package a simple dashboard payload for UI rendering/logging.

    Args:
        sections: {"KPIs": {...}, "Risks": {...}, "Actions": {...}, ...}

    Returns:
        Dict with normalized keys and counts for each section.
    """
    normalized = {}
    counts = {}
    for sec, payload in sections.items():
        normalized[sec] = payload
        counts[sec] = (len(payload) if isinstance(payload, dict) else 1)
    return {
        "snapshot": normalized,
        "section_counts": counts,
        "generated_at": datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
    }
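A standalone sketch of the snapshot assembly (decorator stripped, section contents hypothetical):

```python
from datetime import datetime, timezone

def build_dashboard_snapshot(sections):
    # Plain re-statement of the tool's core logic, without the agno @tool decorator.
    normalized, counts = {}, {}
    for sec, payload in sections.items():
        normalized[sec] = payload
        counts[sec] = len(payload) if isinstance(payload, dict) else 1
    return {
        "snapshot": normalized,
        "section_counts": counts,
        "generated_at": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC"),
    }

snapshot = build_dashboard_snapshot({
    "KPIs": {"close_cycle_days": 11.0, "reserve_adequacy_pct": 103.0},
    "Risks": {"model_drift": "MAE +15% vs baseline"},
    "Actions": {"escalate": "notify reserving lead"},
})
```

The section counts give a cheap sanity check for UI rendering or logging, e.g. that an empty Risks panel is intentional rather than a data-feed failure.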

exa

ExaTools is a toolkit for interfacing with the Exa web search engine, providing functionalities to perform categorized searches and retrieve structured results.

Args:
- enable_search (bool): Enable search functionality. Default is True.
- enable_get_contents (bool): Enable get contents functionality. Default is True.
- enable_find_similar (bool): Enable find similar functionality. Default is True.
- enable_answer (bool): Enable answer generation. Default is True.
- enable_research (bool): Enable research tool functionality. Default is False.
- all (bool): Enable all tools. Overrides individual flags when True. Default is False.
- text (bool): Retrieve text content from results. Default is True.
- text_length_limit (int): Max length of text content per result. Default is 1000.
- api_key (Optional[str]): Exa API key. Retrieved from `EXA_API_KEY` env variable if not provided.
- num_results (Optional[int]): Default number of search results. Overrides individual searches if set.
- start_crawl_date (Optional[str]): Include results crawled on/after this date (`YYYY-MM-DD`).
- end_crawl_date (Optional[str]): Include results crawled on/before this date (`YYYY-MM-DD`).
- start_published_date (Optional[str]): Include results published on/after this date (`YYYY-MM-DD`).
- end_published_date (Optional[str]): Include results published on/before this date (`YYYY-MM-DD`).
- type (Optional[str]): Specify content type (e.g., article, blog, video).
- category (Optional[str]): Filter results by category. Options are "company", "research paper", "news", "pdf", "github", "tweet", "personal site", "linkedin profile", "financial report".
- include_domains (Optional[List[str]]): Restrict results to these domains.
- exclude_domains (Optional[List[str]]): Exclude results from these domains.
- show_results (bool): Log search results for debugging. Default is False.
- model (Optional[str]): The search model to use. Options are 'exa' or 'exa-pro'.
- timeout (int): Maximum time in seconds to wait for API responses. Default is 30 seconds.
