[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"workflow-asset-8d12bfc9":3,"seo:featured-workflow:8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79:fr":86,"workflow-related-asset-8d12bfc9-8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79":87},{"id":4,"uuid":5,"slug":6,"title":7,"description":8,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":12,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":14,"tags":21,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":28,"asset_kind":29,"target_tools":30,"install_mode":34,"entrypoint":18,"risk_profile":35,"dependencies":37,"verification":46,"agent_metadata":49,"agent_fit":62,"trust":74,"provenance":83,"created_at":85,"updated_at":85},3246,"8d12bfc9-4ddd-11f1-9bc6-00163e2b0d79","asset-8d12bfc9","MMPose — OpenMMLab Pose Estimation Toolbox","MMPose provides a modular framework for 2D and 3D pose estimation covering human body, hand, face, and animal keypoint detection with 30+ state-of-the-art methods.","8a911193-3180-11f1-9bc6-00163e2b0d79","AI Open Source","",0,"en",[15],{"id":16,"step_order":17,"title":18,"description":11,"prompt_template":19,"variables":11,"depends_on":20,"expected_output":11},3809,1,"MMPose Estimation","# MMPose — OpenMMLab Pose Estimation Toolbox\n\n## Quick Use\n```bash\npip install mmpose mmengine mmcv mmdet\npython demo\u002Ftopdown_demo_with_mmdet.py \\\n    demo\u002Fmmdetection_cfg\u002Frtmdet_m_640-8xb32_coco-person.py \\\n    https:\u002F\u002Fdownload.openmmlab.com\u002Fmmpose\u002Fv1\u002Fprojects\u002Frtmposev2\u002Frtmpose-m_simcc-body7_pt-body7_420e-256x192.pth \\\n    --input demo\u002Fresources\u002Fdemo.jpg \\\n    --output-root vis_results\u002F\n```\n\n## Introduction\nMMPose is a comprehensive pose estimation toolbox from the OpenMMLab ecosystem. 
It supports diverse tasks from human body keypoints to hand gesture recognition and animal pose tracking, all through a consistent modular API backed by PyTorch.\n\n## What MMPose Does\n- Estimates 2D and 3D keypoints for human body, hands, face, and animals\n- Implements 30+ methods including HRNet, RTMPose, and ViTPose\n- Provides top-down and bottom-up pose estimation pipelines\n- Supports whole-body pose estimation combining body, hand, and face\n- Integrates with MMDetection for person detection before pose estimation\n\n## Architecture Overview\nMMPose follows a top-down or bottom-up paradigm. Top-down first detects each person with a bounding box (via MMDetection), then estimates keypoints within each box. Bottom-up detects all keypoints simultaneously and groups them by person. Both approaches use configurable backbones, heads, and codec modules managed by MMEngine.\n\n## Self-Hosting & Configuration\n- Install mmpose, mmengine, mmcv, and optionally mmdet via pip\n- Download model checkpoints from the MMPose model zoo\n- Use config files to select backbone, keypoint head, and dataset\n- Set input resolution to balance speed and accuracy\n- Deploy with MMDeploy for ONNX or TensorRT inference\n\n## Key Features\n- RTMPose models achieve real-time performance at high accuracy\n- Unified framework for body, hand, face, and animal keypoints\n- Extensive model zoo with pre-trained weights on COCO, MPII, and more\n- Modular codec system for keypoint encoding and decoding\n- Built-in visualization with skeleton overlay on images and video\n\n## Comparison with Similar Tools\n- **MediaPipe** — optimized for mobile and web but closed ecosystem; MMPose offers more research flexibility\n- **OpenPose** — pioneered real-time pose but is slower; RTMPose in MMPose is faster and more accurate\n- **Detectron2** — supports keypoint detection but with fewer pose-specific methods\n- **AlphaPose** — strong real-time performance but narrower scope than MMPose\n\n## FAQ\n**Q: Can 
MMPose track poses across video frames?**\nA: MMPose handles per-frame estimation. Combine with a tracker like ByteTrack for temporal tracking.\n\n**Q: Does it support 3D pose estimation?**\nA: Yes. MMPose includes 3D pose methods that lift 2D keypoints into 3D coordinates.\n\n**Q: What is RTMPose?**\nA: RTMPose is a real-time pose estimation model in MMPose that achieves state-of-the-art speed-accuracy tradeoffs.\n\n**Q: Can I train on custom keypoint definitions?**\nA: Yes. Define a custom dataset class with your keypoint schema and skeleton connectivity.\n\n## Sources\n- https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmpose\n- https:\u002F\u002Fmmpose.readthedocs.io\u002F","0",[22],{"id":23,"name":24,"slug":25,"icon":26},12,"Configs","config","⚙️",false,"bce279bc0d4990f7f3a77ff51b2ff300e456c1a23f843ec884f071b82fcb06ca","skill",[31,32,33],"claude_code","codex","gemini_cli","single",{"executes_code":27,"modifies_global_config":27,"requires_secrets":36,"uses_absolute_paths":27,"network_access":27},[],{"npm":38,"pip":39,"brew":44,"system":45},[],[40,41,42,43],"mmcv","mmdet","mmengine","mmpose",[],[],{"commands":47,"expected_files":48},[],[18],{"asset_kind":29,"target_tools":50,"install_mode":34,"entrypoint":18,"risk_profile":51,"dependencies":53,"content_hash":28,"verification":58,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":52,"uses_absolute_paths":27,"network_access":27},[],{"npm":54,"pip":55,"brew":56,"system":57},[],[40,41,42,43],[],[],{"commands":59,"expected_files":60},[],[18],true,{"target":32,"score":63,"status":64,"policy":65,"why":66,"asset_kind":29,"install_mode":34},98,"native","allow",[67,68,69,70,71,72,73],"target_tools includes codex","asset_kind skill","install_mode single","markdown-only","policy allow","safe markdown-only Codex install","trust 
established",{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":28,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":77,"review_status":78,"signals":79},"established","hash_only",[],"unreviewed",[80,81,82],"author has published assets","content hash available","no dangerous capability badges",{"owner_uuid":9,"owner_name":10,"source_url":84,"content_hash":28,"visibility":17,"created_at":85,"updated_at":85},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-8d12bfc9","2026-05-12 16:35:37",null,[88,140,192,235],{"id":89,"uuid":90,"slug":91,"title":92,"description":93,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":94,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":95,"tags":96,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":98,"asset_kind":29,"target_tools":99,"install_mode":34,"entrypoint":100,"risk_profile":101,"dependencies":103,"verification":108,"agent_metadata":111,"agent_fit":123,"trust":125,"provenance":128,"created_at":130,"updated_at":131,"__relatedScore":132,"__relatedReasons":133,"__sharedTags":138},1941,"eeb52efe-3e25-11f1-9bc6-00163e2b0d79","mmdetection-open-source-object-detection-toolbox-pytorch-eeb52efe","MMDetection — Open-Source Object Detection Toolbox for PyTorch","MMDetection is an open-source object detection and instance segmentation toolbox from OpenMMLab, offering 300+ pretrained models and a modular config system built on PyTorch.",56,[],[97],{"id":23,"name":24,"slug":25,"icon":26},"96fa177a89f86df76229a8c24e570001346844b588b5d103d9364e8d5817aa08",[31,32,33],"MMDetection Object 
Detection",{"executes_code":27,"modifies_global_config":27,"requires_secrets":102,"uses_absolute_paths":27,"network_access":27},[],{"npm":104,"pip":105,"brew":106,"system":107},[],[],[],[],{"commands":109,"expected_files":110},[],[100],{"asset_kind":29,"target_tools":112,"install_mode":34,"entrypoint":100,"risk_profile":113,"dependencies":115,"content_hash":98,"verification":120},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":114,"uses_absolute_paths":27,"network_access":27},[],{"npm":116,"pip":117,"brew":118,"system":119},[],[],[],[],{"commands":121,"expected_files":122},[],[100],{"target":32,"score":63,"status":64,"policy":65,"why":124,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":98,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":126,"review_status":78,"signals":127},[],[80,81,82],{"owner_uuid":9,"owner_name":10,"source_url":129,"content_hash":98,"visibility":17,"created_at":130,"updated_at":131},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fmmdetection-open-source-object-detection-toolbox-pytorch-eeb52efe","2026-04-22 16:33:26","2026-05-12 
18:23:46",97.63381228350873,[134,135,136,137],"topic-match","same-kind","same-target","same-author",[25,139],"configs",{"id":141,"uuid":142,"slug":143,"title":144,"description":145,"author_id":146,"author_name":147,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":17,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":148,"tags":149,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":155,"asset_kind":29,"target_tools":156,"install_mode":34,"entrypoint":157,"risk_profile":158,"dependencies":160,"verification":165,"agent_metadata":168,"agent_fit":180,"trust":182,"provenance":185,"created_at":187,"updated_at":188,"__relatedScore":189,"__relatedReasons":190,"__sharedTags":191},3249,"bca17f13-4ddd-11f1-9bc6-00163e2b0d79","asset-bca17f13","MMAction2 — OpenMMLab Video Understanding Toolbox","MMAction2 provides a modular framework for action recognition, temporal action detection, and spatial-temporal action detection with 20+ methods and support for major video benchmarks.","8a910e34-3180-11f1-9bc6-00163e2b0d79","Script 
Depot",[],[150],{"id":151,"name":152,"slug":153,"icon":154},11,"Scripts","script","📜","e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",[31,32,33],"SKILL.md",{"executes_code":27,"modifies_global_config":27,"requires_secrets":159,"uses_absolute_paths":27,"network_access":27},[],{"npm":161,"pip":162,"brew":163,"system":164},[],[],[],[],{"commands":166,"expected_files":167},[],[],{"asset_kind":29,"target_tools":169,"install_mode":34,"entrypoint":157,"risk_profile":170,"dependencies":172,"content_hash":155,"verification":177,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":171,"uses_absolute_paths":27,"network_access":27},[],{"npm":173,"pip":174,"brew":175,"system":176},[],[],[],[],{"commands":178,"expected_files":179},[],[],{"target":32,"score":63,"status":64,"policy":65,"why":181,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":155,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":183,"review_status":78,"signals":184},[],[80,81,82],{"owner_uuid":146,"owner_name":147,"source_url":186,"content_hash":155,"visibility":17,"created_at":187,"updated_at":188},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-bca17f13","2026-05-12 16:36:57","2026-05-12 
19:49:37",83.45154499349597,[134,135,136],[],{"id":193,"uuid":194,"slug":195,"title":196,"description":197,"author_id":146,"author_name":147,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":12,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":198,"tags":199,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":155,"asset_kind":29,"target_tools":201,"install_mode":34,"entrypoint":157,"risk_profile":202,"dependencies":204,"verification":209,"agent_metadata":212,"agent_fit":224,"trust":226,"provenance":229,"created_at":231,"updated_at":231,"__relatedScore":232,"__relatedReasons":233,"__sharedTags":234},3245,"7c63094f-4ddd-11f1-9bc6-00163e2b0d79","asset-7c63094f","MMSegmentation — OpenMMLab Semantic Segmentation Toolbox","MMSegmentation is a comprehensive semantic segmentation toolbox based on PyTorch, providing 50+ segmentation methods with a unified benchmark and modular design.",[],[200],{"id":151,"name":152,"slug":153,"icon":154},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":203,"uses_absolute_paths":27,"network_access":27},[],{"npm":205,"pip":206,"brew":207,"system":208},[],[],[],[],{"commands":210,"expected_files":211},[],[],{"asset_kind":29,"target_tools":213,"install_mode":34,"entrypoint":157,"risk_profile":214,"dependencies":216,"content_hash":155,"verification":221,"inferred":61},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":215,"uses_absolute_paths":27,"network_access":27},[],{"npm":217,"pip":218,"brew":219,"system":220},[],[],[],[],{"commands":222,"expected_files":223},[],[],{"target":32,"score":63,"status":64,"policy":65,"why":225,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":155,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":227,"review_status":78,"signals":228},[],[80,81,82],
{"owner_uuid":146,"owner_name":147,"source_url":230,"content_hash":155,"visibility":17,"created_at":231,"updated_at":231},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-7c63094f","2026-05-12 16:35:09",80,[134,135,136],[],{"id":236,"uuid":237,"slug":238,"title":239,"description":240,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":241,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":242,"tags":243,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":245,"asset_kind":29,"target_tools":246,"install_mode":34,"entrypoint":247,"risk_profile":248,"dependencies":250,"verification":255,"agent_metadata":258,"agent_fit":270,"trust":272,"provenance":275,"created_at":277,"updated_at":278,"__relatedScore":279,"__relatedReasons":280,"__sharedTags":281},2049,"0cc624cb-3fb6-11f1-9bc6-00163e2b0d79","semantic-ui-human-friendly-ui-component-framework-0cc624cb","Semantic UI — Human-Friendly UI Component Framework","Semantic UI is a front-end component framework that uses human-friendly HTML class names to create responsive, themeable layouts. 
It provides over 50 UI components designed around natural language principles.",74,[],[244],{"id":23,"name":24,"slug":25,"icon":26},"5d728285be7d8bbab0bd0dccbe0be99994d97c0a4b3ddc85ec6068c18b79caf2",[31,32,33],"Semantic UI Overview",{"executes_code":27,"modifies_global_config":27,"requires_secrets":249,"uses_absolute_paths":27,"network_access":27},[],{"npm":251,"pip":252,"brew":253,"system":254},[],[],[],[],{"commands":256,"expected_files":257},[],[247],{"asset_kind":29,"target_tools":259,"install_mode":34,"entrypoint":247,"risk_profile":260,"dependencies":262,"content_hash":245,"verification":267},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":261,"uses_absolute_paths":27,"network_access":27},[],{"npm":263,"pip":264,"brew":265,"system":266},[],[],[],[],{"commands":268,"expected_files":269},[],[247],{"target":32,"score":63,"status":64,"policy":65,"why":271,"asset_kind":29,"install_mode":34},[67,68,69,70,71,72,73],{"author_trust_level":75,"verified_publisher":27,"asset_signed_hash":245,"signature_status":76,"install_count":12,"report_count":12,"dangerous_capability_badges":273,"review_status":78,"signals":274},[],[80,81,82],{"owner_uuid":9,"owner_name":10,"source_url":276,"content_hash":245,"visibility":17,"created_at":277,"updated_at":278},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fsemantic-ui-human-friendly-ui-component-framework-0cc624cb","2026-04-24 16:17:35","2026-05-12 17:52:25",71.81259189508755,[135,136,137],[25,139]]