Skip to content

Sampling

Sampling allows the server to ask the MCP client’s LLM to analyze GitLab data — like getting a summary of a 50-comment merge request or diagnosing why a pipeline failed, without reading through hundreds of log lines.

Instead of returning raw data for the AI to process, the server collects GitLab data, sends it to the client’s LLM via the sampling/createMessage protocol method, and returns the analysis result.

sequenceDiagram
    participant U as User
    participant AI as AI Assistant
    participant S as MCP Server
    participant GL as GitLab API
    participant LLM as Client LLM

    U->>AI: "Summarize MR !42"
    AI->>S: gitlab_summarize_mr_review
    S->>GL: Fetch MR data, diffs, discussions
    GL-->>S: Raw data (potentially large)
    S->>LLM: sampling/createMessage with analysis prompt
    LLM-->>S: Structured analysis
    S-->>AI: Summary with key findings
    AI->>U: Presents concise summary
  1. Data collection — Server fetches relevant data from GitLab APIs
  2. Prompt construction — Data is formatted with a task-specific analysis prompt
  3. LLM analysis — The prompt is sent to the client’s LLM via sampling
  4. Result delivery — The analysis is returned as the tool result

11 sampling-powered tools are available:

ToolDescription
gitlab_analyze_mr_changesAnalyze merge request code changes for quality, bugs, and improvements
gitlab_review_mr_securitySecurity-focused review of merge request changes
gitlab_summarize_mr_reviewSummarize merge request discussions and review feedback
ToolDescription
gitlab_summarize_issueConcise summary of an issue with full context and discussion
gitlab_analyze_issue_scopeEstimate complexity and scope of an issue
ToolDescription
gitlab_analyze_pipeline_failureDiagnose why a pipeline failed with root cause analysis
gitlab_analyze_ci_configurationReview .gitlab-ci.yml for best practices and issues
gitlab_analyze_deployment_historyAnalyze deployment patterns and reliability
ToolDescription
gitlab_generate_release_notesAuto-generate release notes from milestone issues
gitlab_generate_milestone_reportSprint/milestone progress report with metrics
gitlab_find_technical_debtIdentify technical debt indicators in the project

Before sending any data to the LLM, the server strips sensitive credentials using regex pattern matching:

PatternExamples
GitLab tokensglpat-*, gloas-*, gldt-*
AWS credentialsAKIA*, AWS secret keys
Slack tokensxoxb-*, xoxp-*
JWTseyJ* (JSON Web Tokens)
Generic secretsPrivate keys, API keys matching common patterns

All matched patterns are replaced with [REDACTED] before the data reaches the LLM.

LayerProtection
Credential strippingRegex-based removal of tokens, keys, and secrets
Prompt injection preventionSystem prompt instructs LLM to ignore injection attempts
Size limitingInput data is truncated to prevent context overflow
Hardened system promptAnalysis-focused instructions that resist misuse

Sampling requires the MCP client to support the sampling capability. During initialization, the server checks for client support:

  • Supported: Claude Desktop, Claude Code
  • Not yet supported: VS Code Copilot, Cursor

When sampling is not available, the server returns a helpful message explaining that the tool requires a client with sampling support, and suggests using the underlying data-fetching tools directly.