LangSmith

    Evals and Testing

    LangChain

    All lessons

    Run LLM Evals with Pytest and LangSmith
    Lesson 1

    This lesson shows how LangSmith enhances Pytest for debugging and evaluating large language model (LLM) applications. By integrating LangSmith, developers gain detailed tracing, comprehensive logging, and streamlined result sharing, which improves collaboration and makes LLM development more efficient. A short sketch of the workflow follows this entry.

    15m · Feb 16, 2025
    Free
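
    To make this concrete, here is a minimal sketch of a LangSmith-instrumented Pytest test. It assumes the langsmith Python SDK's pytest plugin (the @pytest.mark.langsmith marker plus the langsmith.testing logging helpers) and a LANGSMITH_API_KEY in the environment; generate_sql is a hypothetical stand-in for your own LLM call.

    ```python
    # Minimal sketch, assuming the langsmith SDK's pytest plugin.
    # Run with `pytest --langsmith-output` for the plugin's rich terminal view.
    import pytest
    from langsmith import testing as t


    def generate_sql(question: str) -> str:
        """Hypothetical app code under test -- swap in your real LLM call."""
        return "SELECT COUNT(*) FROM users WHERE signup_date >= '2025-02-09';"


    @pytest.mark.langsmith  # records this test as a traced experiment run
    def test_generates_select_statement():
        question = "How many users signed up last week?"
        t.log_inputs({"question": question})  # logged input, visible in the UI
        sql = generate_sql(question)
        t.log_outputs({"sql": sql})  # logged output, shareable from the UI
        # A metric beyond pass/fail, attached to the run as feedback.
        t.log_feedback(key="valid_sql", score=int(sql.strip().upper().startswith("SELECT")))
        assert "SELECT" in sql.upper()  # the ordinary pytest assertion still gates CI
    ```

    Because each test also produces a traced experiment run, results can be shared as a link from the LangSmith UI instead of being read out of a terminal log.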
    Run LLM evals with Jest and LangSmith
    Lesson 1

    This lesson demonstrates how LangSmith enhances AI-agent testing by integrating with Jest/Vitest, providing detailed metrics and traceability beyond standard pass/fail results. The integration improves the developer experience and makes it easy to share comprehensive test results, including marketing-copy quality scores and length analysis, with both technical and non-technical stakeholders. A short sketch of the workflow follows this entry.

    14m · Feb 16, 2025
    Free
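
    For the JavaScript lesson, here is a comparable sketch assuming the langsmith JS SDK's Vitest entry point (langsmith/vitest; a matching langsmith/jest export covers Jest). generateTagline is a hypothetical stand-in for the marketing-copy generator the lesson evaluates.

    ```typescript
    // Minimal sketch, assuming the langsmith JS SDK's Vitest integration
    // and a LANGSMITH_API_KEY in the environment.
    import * as ls from "langsmith/vitest"; // use "langsmith/jest" under Jest
    import { expect } from "vitest";

    // Hypothetical app code under test -- swap in your real LLM call.
    async function generateTagline(product: string): Promise<string> {
      return `Ship ${product} agents with confidence.`;
    }

    ls.describe("marketing copy generator", () => {
      ls.test(
        "tagline stays under 80 characters",
        { inputs: { product: "LangSmith" } }, // inputs are logged to the experiment
        async ({ inputs }) => {
          const tagline = await generateTagline(inputs.product);
          ls.logOutputs({ tagline }); // recorded on the run for stakeholders to review
          // Length analysis as a numeric feedback score, beyond pass/fail.
          ls.logFeedback({ key: "length_ok", score: tagline.length <= 80 ? 1 : 0 });
          expect(tagline.length).toBeLessThanOrEqual(80);
        }
      );
    });
    ```

    The ls.describe/ls.test wrappers group these results into a LangSmith experiment, so scores such as length_ok can be browsed and shared without digging through raw test output.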