[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"workflow-asset-8d12bfc9":3,"seo:featured-workflow:8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79:en":86,"workflow-related-asset-8d12bfc9-8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79":87},{"id":4,"uuid":5,"slug":6,"title":7,"description":8,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":12,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":14,"tags":21,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":28,"asset_kind":29,"target_tools":30,"install_mode":34,"entrypoint":18,"risk_profile":35,"dependencies":37,"verification":46,"agent_metadata":49,"agent_fit":62,"trust":74,"provenance":83,"created_at":85,"updated_at":85},3246,"8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79","asset-8d12bfc9","MMPose — OpenMMLab Pose Estimation Toolbox","MMPose provides a modular framework for 2D and 3D pose estimation covering human body, hand, face, and animal keypoint detection with 30+ state-of-the-art methods.","8a911193-3180-11f1-9bc6-00163e2b0d79","AI Open Source","",0,"en",[15],{"id":16,"step_order":17,"title":18,"description":11,"prompt_template":19,"variables":11,"depends_on":20,"expected_output":11},3809,1,"MMPose Estimation","# MMPose — OpenMMLab Pose Estimation Toolbox\n\n## Quick Use\n```bash\npip install mmpose mmengine mmcv mmdet\npython demo\u002Ftopdown_demo_with_mmdet.py \\\n    demo\u002Fmmdetection_cfg\u002Frtmdet_m_640-8xb32_coco-person.py \\\n    https:\u002F\u002Fdownload.openmmlab.com\u002Fmmpose\u002Fv1\u002Fprojects\u002Frtmposev2\u002Frtmpose-m_simcc-body7_pt-body7_420e-256x192.pth \\\n    --input demo\u002Fresources\u002Fdemo.jpg \\\n    --output-root vis_results\u002F\n```\n\n## Introduction\nMMPose is a comprehensive pose estimation toolbox from the OpenMMLab ecosystem. 
It supports diverse tasks from human body keypoints to hand gesture recognition and animal pose tracking, all through a consistent modular API backed by PyTorch.\n\n## What MMPose Does\n- Estimates 2D and 3D keypoints for human body, hands, face, and animals\n- Implements 30+ methods including HRNet, RTMPose, and ViTPose\n- Provides top-down and bottom-up pose estimation pipelines\n- Supports whole-body pose estimation combining body, hand, and face\n- Integrates with MMDetection for person detection before pose estimation\n\n## Architecture Overview\nMMPose follows a top-down or bottom-up paradigm. Top-down first detects each person with a bounding box (via MMDetection), then estimates keypoints within each box. Bottom-up detects all keypoints simultaneously and groups them by person. Both approaches use configurable backbones, heads, and codec modules managed by MMEngine.\n\n## Self-Hosting & Configuration\n- Install mmpose, mmengine, mmcv, and optionally mmdet via pip\n- Download model checkpoints from the MMPose model zoo\n- Use config files to select backbone, keypoint head, and dataset\n- Set input resolution to balance speed and accuracy\n- Deploy with MMDeploy for ONNX or TensorRT inference\n\n## Key Features\n- RTMPose models achieve real-time performance at high accuracy\n- Unified framework for body, hand, face, and animal keypoints\n- Extensive model zoo with pre-trained weights on COCO, MPII, and more\n- Modular codec system for keypoint encoding and decoding\n- Built-in visualization with skeleton overlay on images and video\n\n## Comparison with Similar Tools\n- **MediaPipe** — optimized for mobile and web but closed ecosystem; MMPose offers more research flexibility\n- **OpenPose** — pioneered real-time pose but is slower; RTMPose in MMPose is faster and more accurate\n- **Detectron2** — supports keypoint detection but with fewer pose-specific methods\n- **AlphaPose** — strong real-time performance but narrower scope than MMPose\n\n## FAQ\n**Q: Can 
MMPose track poses across video frames?**\nA: MMPose handles per-frame estimation. Combine with a tracker like ByteTrack for temporal tracking.\n\n**Q: Does it support 3D pose estimation?**\nA: Yes. MMPose includes 3D pose methods that lift 2D keypoints into 3D coordinates.\n\n**Q: What is RTMPose?**\nA: RTMPose is a real-time pose estimation model in MMPose that achieves state-of-the-art speed-accuracy tradeoffs.\n\n**Q: Can I train on custom keypoint definitions?**\nA: Yes. Define a custom dataset class with your keypoint schema and skeleton connectivity.\n\n## Sources\n- https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmpose\n- https:\u002F\u002Fmmpose.readthedocs.io\u002F","0",[22],{"id":23,"name":24,"slug":25,"icon":26},12,"Configs","config","⚙️",false,"bce279bc0d4990f7f3a77ff51b2ff300e456c1a23f843ec884f071b82fcb06ca","skill",[31,32,33],"claude_code","codex","gemini_cli","single",{"executes_code":27,"modifies_global_config":27,"requires_secrets":36,"uses_absolute_paths":27,"network_access":27},[],{"npm":38,"pip":39,"brew":44,"system":45},[],[40,41,42,43],"mmcv","mmdet","mmengine","mmpose",[],[],{"commands":47,"expected_files":48},[],[18],{"asset_kind":29,"target_tools":50,"install_mode":34,"entrypoint":18,"risk_profile":51,"dependencies":53,"content_hash":28,"verification":58,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":52,"uses_absolute_paths":27,"network_access":27},[],{"npm":54,"pip":55,"brew":56,"system":57},[],[40,41,42,43],[],[],{"commands":59,"expected_files":60},[],[18],true,{"target":32,"score":63,"status":64,"policy":65,"why":66,"asset_kind":29,"install_mode":34},98,"native","allow",[67,68,69,70,71,72,73],"target_tools includes codex","asset_kind skill","install_mode single","markdown-only","policy allow","safe markdown-only Codex install","trust 
established",{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":28,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":77,"review_status":78,"signals":79},"established","hash_only",[],"unreviewed",[80,81,82],"author has published assets","content hash available","no dangerous capability badges",{"owner_uuid":9,"owner_name":10,"source_url":84,"content_hash":28,"visibility":17,"created_at":85,"updated_at":85},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-8d12bfc9","2026-05-12 16:35:37",null,[88,140,193,236],{"id":89,"uuid":90,"slug":91,"title":92,"description":93,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":94,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":95,"tags":96,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":98,"asset_kind":29,"target_tools":99,"install_mode":34,"entrypoint":100,"risk_profile":101,"dependencies":103,"verification":108,"agent_metadata":111,"agent_fit":123,"trust":125,"provenance":128,"created_at":130,"updated_at":131,"__relatedScore":132,"__relatedReasons":133,"__sharedTags":138},1941,"eeb52efe-3e25-11f1-9bc6-00163e2b0d79","mmdetection-open-source-object-detection-toolbox-pytorch-eeb52efe","MMDetection — Open-Source Object Detection Toolbox for PyTorch","MMDetection is an open-source object detection and instance segmentation toolbox from OpenMMLab, offering 300+ pretrained models and a modular config system built on PyTorch.",56,[],[97],{"id":23,"name":24,"slug":25,"icon":26},"96fa177a89f86df76229a8c24e570001346844b588b5d103d9364e8d5817aa08",[31,32,33],"MMDetection Object 
Detection",{"executes_code":27,"modifies_global_config":27,"requires_secrets":102,"uses_absolute_paths":27,"network_access":27},[],{"npm":104,"pip":105,"brew":106,"system":107},[],[],[],[],{"commands":109,"expected_files":110},[],[100],{"asset_kind":29,"target_tools":112,"install_mode":34,"entrypoint":100,"risk_profile":113,"dependencies":115,"content_hash":98,"verification":120},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":114,"uses_absolute_paths":27,"network_access":27},[],{"npm":116,"pip":117,"brew":118,"system":119},[],[],[],[],{"commands":121,"expected_files":122},[],[100],{"target":32,"score":63,"status":64,"policy":65,"why":124,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":98,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":126,"review_status":78,"signals":127},[],[80,81,82],{"owner_uuid":9,"owner_name":10,"source_url":129,"content_hash":98,"visibility":17,"created_at":130,"updated_at":131},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fmmdetection-open-source-object-detection-toolbox-pytorch-eeb52efe","2026-04-22 16:33:26","2026-05-12 
18:23:46",97.63381228350873,[134,135,136,137],"topic-match","same-kind","same-target","same-author",[25,139],"configs",{"id":141,"uuid":142,"slug":143,"title":144,"description":145,"author_id":146,"author_name":147,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":148,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":149,"tags":150,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":156,"asset_kind":29,"target_tools":157,"install_mode":34,"entrypoint":158,"risk_profile":159,"dependencies":161,"verification":166,"agent_metadata":169,"agent_fit":181,"trust":183,"provenance":186,"created_at":188,"updated_at":189,"__relatedScore":190,"__relatedReasons":191,"__sharedTags":192},3249,"bca17f13-4ddd-11f1-9bc6-00163e2b0d79","asset-bca17f13","MMAction2 — OpenMMLab Video Understanding Toolbox","MMAction2 provides a modular framework for action recognition, temporal action detection, and spatial-temporal action detection with 20+ methods and support for major video benchmarks.","8a910e34-3180-11f1-9bc6-00163e2b0d79","Script 
Depot",2,[],[151],{"id":152,"name":153,"slug":154,"icon":155},11,"Scripts","script","📜","e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",[31,32,33],"SKILL.md",{"executes_code":27,"modifies_global_config":27,"requires_secrets":160,"uses_absolute_paths":27,"network_access":27},[],{"npm":162,"pip":163,"brew":164,"system":165},[],[],[],[],{"commands":167,"expected_files":168},[],[],{"asset_kind":29,"target_tools":170,"install_mode":34,"entrypoint":158,"risk_profile":171,"dependencies":173,"content_hash":156,"verification":178,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":172,"uses_absolute_paths":27,"network_access":27},[],{"npm":174,"pip":175,"brew":176,"system":177},[],[],[],[],{"commands":179,"expected_files":180},[],[],{"target":32,"score":63,"status":64,"policy":65,"why":182,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":156,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":184,"review_status":78,"signals":185},[],[80,81,82],{"owner_uuid":146,"owner_name":147,"source_url":187,"content_hash":156,"visibility":17,"created_at":188,"updated_at":189},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-bca17f13","2026-05-12 16:36:57","2026-05-12 
19:50:36",83.7156818820795,[134,135,136],[],{"id":194,"uuid":195,"slug":196,"title":197,"description":198,"author_id":146,"author_name":147,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":12,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":199,"tags":200,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":156,"asset_kind":29,"target_tools":202,"install_mode":34,"entrypoint":158,"risk_profile":203,"dependencies":205,"verification":210,"agent_metadata":213,"agent_fit":225,"trust":227,"provenance":230,"created_at":232,"updated_at":232,"__relatedScore":233,"__relatedReasons":234,"__sharedTags":235},3245,"7c63094f-4ddd-11f1-9bc6-00163e2b0d79","asset-7c63094f","MMSegmentation — OpenMMLab Semantic Segmentation Toolbox","MMSegmentation is a comprehensive semantic segmentation toolbox based on PyTorch, providing 50+ segmentation methods with a unified benchmark and modular design.",[],[201],{"id":152,"name":153,"slug":154,"icon":155},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":204,"uses_absolute_paths":27,"network_access":27},[],{"npm":206,"pip":207,"brew":208,"system":209},[],[],[],[],{"commands":211,"expected_files":212},[],[],{"asset_kind":29,"target_tools":214,"install_mode":34,"entrypoint":158,"risk_profile":215,"dependencies":217,"content_hash":156,"verification":222,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":216,"uses_absolute_paths":27,"network_access":27},[],{"npm":218,"pip":219,"brew":220,"system":221},[],[],[],[],{"commands":223,"expected_files":224},[],[],{"target":32,"score":63,"status":64,"policy":65,"why":226,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":156,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":228,"review_status":78,"signals":229},[],[80,81,82],{
"owner_uuid":146,"owner_name":147,"source_url":231,"content_hash":156,"visibility":17,"created_at":232,"updated_at":232},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-7c63094f","2026-05-12 16:35:09",80,[134,135,136],[],{"id":237,"uuid":238,"slug":239,"title":240,"description":241,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":242,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":243,"tags":244,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":246,"asset_kind":29,"target_tools":247,"install_mode":34,"entrypoint":248,"risk_profile":249,"dependencies":251,"verification":256,"agent_metadata":259,"agent_fit":271,"trust":273,"provenance":276,"created_at":278,"updated_at":279,"__relatedScore":280,"__relatedReasons":281,"__sharedTags":282},2049,"0cc624cb-3fb6-11f1-9bc6-00163e2b0d79","semantic-ui-human-friendly-ui-component-framework-0cc624cb","Semantic UI — Human-Friendly UI Component Framework","Semantic UI is a front-end component framework that uses human-friendly HTML class names to create responsive, themeable layouts. 
It provides over 50 UI components designed around natural language principles.",74,[],[245],{"id":23,"name":24,"slug":25,"icon":26},"5d728285be7d8bbab0bd0dccbe0be99994d97c0a4b3ddc85ec6068c18b79caf2",[31,32,33],"Semantic UI Overview",{"executes_code":27,"modifies_global_config":27,"requires_secrets":250,"uses_absolute_paths":27,"network_access":27},[],{"npm":252,"pip":253,"brew":254,"system":255},[],[],[],[],{"commands":257,"expected_files":258},[],[248],{"asset_kind":29,"target_tools":260,"install_mode":34,"entrypoint":248,"risk_profile":261,"dependencies":263,"content_hash":246,"verification":268},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":262,"uses_absolute_paths":27,"network_access":27},[],{"npm":264,"pip":265,"brew":266,"system":267},[],[],[],[],{"commands":269,"expected_files":270},[],[248],{"target":32,"score":63,"status":64,"policy":65,"why":272,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":246,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":274,"review_status":78,"signals":275},[],[80,81,82],{"owner_uuid":9,"owner_name":10,"source_url":277,"content_hash":246,"visibility":17,"created_at":278,"updated_at":279},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fsemantic-ui-human-friendly-ui-component-framework-0cc624cb","2026-04-24 16:17:35","2026-05-12 17:52:25",71.81259189508755,[135,136,137],[25,139]]