Comprehensive web animation skill. Covers Framer Motion, CSS animations, page transitions, hover interactions, scroll animations, loading states (spinners, skeletons), drag & drop, modal/dialog transitions, and micro-interactions in general, such as button feedback.
Version 3.1.0 | Last Updated: 2025-11-24
A comprehensive framework of specialized AI skills, MCP servers, and development tools for AI-assisted software development. Features automated validation, agent evaluation, and quality assurance.
Total: 199 Resources
| Category | Count | Description |
|---|---|---|
| Skills | 64 | Specialized AI methodologies and workflows |
| MCPs | 51 | Model Context Protocol servers (executable tools) |
| Tools | 4 | Core utility scripts |
| Components | 75 | Reusable UI and system components |
| Integrations | 5 | Third-party service connectors |
MCP Coverage: 79.7% (51 MCPs supporting 64 Skills)
Implement Eval-Driven Development (EDD) for continuous agent quality assurance:
# Run agent evaluations
node scripts/run-agent-evals.js --dataset tests/fixtures/golden-dataset-example.json --mock
npm run validate:quick # Fast feedback
npm run validate:full # Comprehensive checks
- `.claude/CLAUDE.md` - Complete Claude Code configuration guide
- `FINAL-RESOURCE-COUNTS.md` - Resource tracking and metrics
- `docs/VALIDATION-SYSTEM.md` - Validation methodology

📖 New to this repository? Check out our Installation Guide and Quick Start Guide for step-by-step instructions.
# Clone the repository
git clone https://github.com/daffy0208/ai-dev-standards.git
cd ai-dev-standards
# Install dependencies
npm install
# Run validation to ensure everything works
npm run validate
# Clone as a reference
git clone https://github.com/daffy0208/ai-dev-standards.git ~/ai-dev-standards
# Reference skills and patterns in your .cursorrules or .claude/claude.md
# See docs/EXISTING-PROJECTS.md for integration guide
Open your project in Claude Code
Reference this repository in your project instructions:
You have access to ai-dev-standards at ~/ai-dev-standards
When needed, reference skills from skills/ and patterns from standards/
Use the skill-registry.json to find relevant skills for tasks
Claude will automatically discover and use appropriate skills
Think of this as a shared knowledge base between you and Claude:
Methodologies Claude follows automatically:
Executable tools that extend Claude's capabilities:
Proven approaches for complex systems:
# Quick validation (10-30 seconds)
npm run validate:quick
# Full validation (2-5 minutes)
npm run validate:full
# Agent evaluation only
node scripts/run-agent-evals.js --dataset tests/fixtures/golden-dataset-example.json --mock
Validates:
Test AI agents against golden datasets to ensure consistent, high-quality outputs:
{
"tests": [
{
"id": "T001",
"input": "Create a React button component with TypeScript",
"expected": "import React from 'react';",
"grading": { "type": "contains", "threshold": 0.8 }
}
]
}
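A minimal sketch of the eval loop such a dataset drives, assuming `"contains"` grading scores 1 when the expected string appears in the agent output (the actual `scripts/run-agent-evals.js` is more elaborate, with latency tracking, mock agents, and verbose output):

```javascript
// Minimal eval-loop sketch over a golden dataset. Assumption: "contains"
// grading scores 1 when the expected string appears in the agent output;
// the repo's actual scripts/run-agent-evals.js may score differently.
const dataset = {
  tests: [
    {
      id: 'T001',
      input: 'Create a React button component with TypeScript',
      expected: "import React from 'react';",
      grading: { type: 'contains', threshold: 0.8 },
    },
  ],
};

// Stand-in for the --mock agent: returns a canned completion.
const mockAgent = (input) =>
  "import React from 'react';\nexport const Button = () => <button />;";

const results = dataset.tests.map((t) => {
  const output = mockAgent(t.input);
  const score = output.includes(t.expected) ? 1 : 0;
  return { id: t.id, score, passed: score >= t.grading.threshold };
});

console.log(results); // [ { id: 'T001', score: 1, passed: true } ]
```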
Features:
- `docs/GETTING-STARTED.md`, `docs/QUICK-START.md`
- `meta/PROJECT-CONTEXT.md`, `meta/HOW-TO-USE.md`
- `.claude/CLAUDE.md`, `FINAL-RESOURCE-COUNTS.md`
- `docs/VALIDATION-SYSTEM.md`

# Find skills for a task
grep -r "mvp" meta/skill-registry.json
# Search all resources
grep -r "authentication" meta/
# View resource counts
cat FINAL-RESOURCE-COUNTS.md
ai-dev-standards/
├── skills/ # 64 specialized methodologies
│ ├── mvp-builder/ # MVP development & prioritization
│ ├── rag-implementer/ # RAG system implementation
│ ├── api-designer/ # API design patterns
│ └── [61 more...]
│
├── mcp-servers/ # 51 executable tools
│ ├── semantic-search-mcp/ # Semantic code search
│ ├── vector-database-mcp/ # Vector DB integration
│ ├── code-quality-scanner-mcp/
│ └── [48 more...]
│
├── standards/ # Architecture & best practices
│ ├── architecture-patterns/
│ ├── best-practices/
│ ├── coding-conventions/
│ └── project-structure/
│
├── meta/ # Resource registry & context
│ ├── registry.json # Master resource registry
│ ├── skill-registry.json # Skill catalog
│ ├── mcp-registry.json # MCP catalog
│ └── PROJECT-CONTEXT.md # For AI assistants
│
├── docs/ # Comprehensive documentation
│ ├── GETTING-STARTED.md
│ ├── VALIDATION-SYSTEM.md
│ ├── AGENT-VALIDATION.md # NEW!
│ └── [40+ more guides...]
│
├── scripts/ # Automation & validation
│ ├── run-agent-evals.js # NEW! Agent evaluation
│ ├── validate-full.sh # Full validation suite
│ └── [20+ more scripts...]
│
└── tests/ # Test suites & fixtures
├── fixtures/
│ └── golden-dataset-example.json # NEW!
└── [150+ test files...]
User: "I want to build a SaaS product for invoice management"
Claude uses:
1. product-strategist → Validate problem-solution fit
2. mvp-builder → Identify P0 features (invoicing, payment tracking)
3. frontend-builder → React/Next.js structure
4. api-designer → REST API design
5. deployment-advisor → Vercel + Railway recommendation
6. security-engineer → Auth, data encryption, PCI compliance
User: "Add AI-powered search to our documentation"
Claude uses:
1. rag-implementer → RAG methodology
2. rag-pattern.md → Advanced RAG architecture
3. vector-database-mcp → Pinecone integration
4. embedding-generator-mcp → OpenAI embeddings
5. semantic-search-mcp → Search implementation
User: "Audit our codebase for quality issues"
Claude uses:
1. quality-auditor → Comprehensive audit methodology
2. code-quality-scanner-mcp → Static analysis
3. security-scanner-mcp → Vulnerability detection
4. performance-profiler-mcp → Performance bottlenecks
5. test-runner-mcp → Test coverage analysis
6. agent-evaluator → AI agent quality checks (NEW!)
# Search skills by keyword
grep -i "authentication" meta/skill-registry.json
grep -i "database" meta/skill-registry.json
grep -i "testing" meta/skill-registry.json
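Beyond grep, the registry can be queried programmatically. A hedged sketch, assuming a schema along the lines of `{ "skills": [{ "name", "description" }] }` (the real `meta/skill-registry.json` layout may differ), with inline sample data so it runs standalone:

```javascript
// Hypothetical programmatic skill lookup. Assumption: the registry holds
// entries with "name" and "description" fields; adjust to the real schema.
const registry = {
  skills: [
    { name: 'security-engineer', description: 'Authentication and encryption' },
    { name: 'mvp-builder', description: 'MVP prioritization' },
  ],
};

function findSkills(registry, keyword) {
  const re = new RegExp(keyword, 'i');
  return registry.skills
    .filter((s) => re.test(s.name) || re.test(s.description))
    .map((s) => s.name);
}

console.log(findSkills(registry, 'auth')); // [ 'security-engineer' ]
```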
View meta/skill-registry.json for complete categorization:
Skills activate automatically based on your conversation with Claude. Just describe what you want to build!
npm run validate:quick
Checks:
Use when: Before commits, during rapid development
npm run validate:full
Checks:
Use when: Before pushing, in CI/CD, before releases
Test AI agents against golden datasets:
# Run with mock agent (for testing)
node scripts/run-agent-evals.js --dataset tests/fixtures/golden-dataset-example.json --mock
# Run with real agent (production)
node scripts/run-agent-evals.js --dataset tests/fixtures/golden-dataset-example.json
# Verbose output
node scripts/run-agent-evals.js --dataset tests/fixtures/golden-dataset-example.json --mock --verbose
Output:
📊 Summary
----------------------------------------
Total Tests: 10
Passed: 10
Failed: 0
Pass Rate: 100.0%
Avg Score: 0.96
Avg Latency: 47ms
----------------------------------------
✅ Agent Evaluations PASSED
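The summary fields above can be derived from per-test results along these lines (field names here are assumptions for illustration, not the runner's internals):

```javascript
// Sketch of how the summary could be computed from per-test results.
// Assumption: each result carries passed, score, and latencyMs fields.
const results = [
  { passed: true, score: 0.96, latencyMs: 47 },
  { passed: true, score: 0.96, latencyMs: 47 },
];

function summarize(results) {
  const total = results.length;
  const passed = results.filter((r) => r.passed).length;
  const avg = (key) => results.reduce((sum, r) => sum + r[key], 0) / total;
  return {
    total,
    passed,
    failed: total - passed,
    passRate: ((passed / total) * 100).toFixed(1) + '%',
    avgScore: avg('score').toFixed(2),
    avgLatency: Math.round(avg('latencyMs')) + 'ms',
  };
}

console.log(summarize(results));
// { total: 2, passed: 2, failed: 0, passRate: '100.0%', avgScore: '0.96', avgLatency: '47ms' }
```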
See docs/VALIDATION-SYSTEM.md for complete methodology.
- `docs/QUICK-START.md` - 5-minute quick start
- `docs/GETTING-STARTED.md` - Comprehensive setup guide
- `docs/EXISTING-PROJECTS.md` - Integration for existing projects
- `docs/VALIDATION-SYSTEM.md` - Validation methodology
- `docs/AGENT-VALIDATION.md` - Agent evaluation guide (NEW!)
- `.claude/commands/validate.md` - Validation command reference
- `.claude/CLAUDE.md` - Claude Code configuration (NEW!)
- `FINAL-RESOURCE-COUNTS.md` - Resource metrics (NEW!)
- `meta/PROJECT-CONTEXT.md` - For AI assistants
- `meta/HOW-TO-USE.md` - Navigation guide
- `CONTRIBUTING.md` - Contribution guidelines
- `docs/MCP-DEVELOPMENT-ROADMAP.md` - MCP development guide
- `docs/TROUBLESHOOTING.md` - Common issues

# Run all tests
npm test
# Run specific test suites
npm run test:unit # Unit tests only
npm run test:registry # Registry validation
npm run test:cli # CLI tests
# Run agent evaluations
npm run test:agent-eval # Agent evaluation suite
# Linting
npm run lint # Check code quality
npm run lint:fix # Auto-fix issues
# Type Checking
npm run typecheck # TypeScript validation
# Formatting
npm run format # Format code with Prettier
npm run format:check # Check formatting
# Registry
npm run validate:registries # Validate resource registries
npm run generate:registries # Regenerate registries
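These commands map to entries in `package.json`; a plausible `scripts` block is sketched below. The script names come from the commands above, but every implementation on the right-hand side is an assumption about this repo, not its actual configuration:

```json
{
  "scripts": {
    "lint": "eslint .",
    "lint:fix": "eslint . --fix",
    "typecheck": "tsc --noEmit",
    "format": "prettier --write .",
    "format:check": "prettier --check .",
    "validate:registries": "node scripts/validate-registries.js",
    "generate:registries": "node scripts/generate-registries.js"
  }
}
```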
Create your own agent evaluation datasets:
{
"version": "1.0.0",
"description": "Your custom test dataset",
"tests": [
{
"id": "T001",
"category": "code-generation",
"description": "Test description",
"input": "Your test prompt",
"expected": "Expected output or pattern",
"grading": {
"type": "contains", // or "exact", "regex", "llm-graded"
"threshold": 0.8
},
"tags": ["category", "feature"]
}
]
}
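The grading types listed above can be sketched as a small dispatcher. This illustrates the semantics the type names suggest, not the runner's actual implementation; `"llm-graded"` is left out because it requires a model call:

```javascript
// Hypothetical grader dispatcher for the "exact", "contains", and "regex"
// grading types. The real run-agent-evals.js may score these differently.
function grade(output, expected, { type, threshold = 1 }) {
  let score;
  switch (type) {
    case 'exact':
      score = output.trim() === expected.trim() ? 1 : 0;
      break;
    case 'contains':
      score = output.includes(expected) ? 1 : 0;
      break;
    case 'regex':
      score = new RegExp(expected).test(output) ? 1 : 0;
      break;
    default:
      throw new Error(`Unsupported grading type: ${type}`);
  }
  return { score, passed: score >= threshold };
}

console.log(grade('const x = 1;', 'x\\s*=\\s*1', { type: 'regex', threshold: 0.8 }));
// { score: 1, passed: true }
```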
v3.1.0 (2025-11-24): Agent Evaluation System
v3.0.3 (2025-11-14): Validation System
v2.1.0 (2025-10-29): Orchestration
v3.2.0: Enhanced Agent Evaluation
v3.3.0: MCP Expansion
v4.0.0: Ecosystem Integration
We welcome contributions! See CONTRIBUTING.md for guidelines.
# Clone the repository
git clone https://github.com/daffy0208/ai-dev-standards.git
cd ai-dev-standards
# Install dependencies
npm install
# Run validation
npm run validate:quick
# Make changes and test
npm test
# Submit PR
MIT License - see LICENSE for details
This repository synthesizes best practices from:
Maintained by: @daffy0208
docs/ directory

Built for excellence in AI-assisted development 🚀