# MMSegmentation — OpenMMLab Semantic Segmentation Toolbox

## Quick Use
```bash
pip install mmsegmentation mmengine mmcv
python -c "
from mmseg.apis import init_model, inference_model
config = 'configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py'
checkpoint = 'pspnet_r50-d8_512x1024_40k_cityscapes.pth'
model = init_model(config, checkpoint, device='cuda:0')
result = inference_model(model, 'demo.png')
print(result.pred_sem_seg.data.shape)
"
```

## Introduction
MMSegmentation provides a unified platform for training and evaluating semantic segmentation models. Part of the OpenMMLab ecosystem, it offers a modular design that lets researchers mix and match backbones, decoders, and loss functions to rapidly prototype new architectures.

## What MMSegmentation Does
- Implements 50+ segmentation methods including DeepLab, PSPNet, and SegFormer
- Supports 15+ benchmark datasets such as Cityscapes, ADE20K, and PASCAL VOC
- Provides a modular config system to compose models from reusable components
- Offers pre-trained weights for immediate inference and fine-tuning
- Scales training across multiple GPUs with distributed data parallel

## Architecture Overview
MMSegmentation follows a registry-based architecture where backbones, decode heads, losses, and datasets are registered as interchangeable modules. A Python config file declares which components to assemble. The training loop is managed by MMEngine, which handles logging, checkpointing, and distributed coordination.

## Self-Hosting & Configuration
- Install mmsegmentation, mmengine, and mmcv via pip
- Download pre-trained checkpoints from the model zoo
- Modify config files to point to your dataset directory
- Adjust batch size and learning rate for your GPU memory
- Launch distributed training with torchrun or slurm scripts

## Key Features
- 50+ architectures with consistent training and evaluation APIs
- Modular config system for rapid experimentation
- Rich model zoo with pre-trained weights on major benchmarks
- Support for Transformer-based and CNN-based segmentation
- Built-in visualization tools for prediction overlays

## Comparison with Similar Tools
- **Detectron2** — broader scope (detection + segmentation); MMSeg focuses deeply on semantic segmentation
- **torchvision** — fewer architectures and no unified config system
- **segmentation_models.pytorch** — simpler API but lacks MMSeg's breadth of methods
- **PaddleSeg** — similar scope within the PaddlePaddle ecosystem

## FAQ
**Q: Can I use custom datasets?**
A: Yes. Implement a dataset class or convert your data to a supported format like Cityscapes.

**Q: Does it support instance segmentation?**
A: No. Use MMDetection for instance and panoptic segmentation tasks.

**Q: Which backbone gives the best accuracy?**
A: Swin Transformer and BEiT backbones currently lead ADE20K benchmarks when paired with UPerNet.

**Q: Can I export models for deployment?**
A: Yes. Use MMDeploy to convert models to ONNX, TensorRT, or OpenVINO formats.

## Sources
- https://github.com/open-mmlab/mmsegmentation
- https://mmsegmentation.readthedocs.io/
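The registry mechanism described in the Architecture Overview can be illustrated with a minimal, self-contained sketch. This is plain Python showing the pattern, not MMSegmentation's or MMEngine's actual implementation: components register themselves under a name, and a config dict's `type` key selects which one to build.

```python
# Minimal sketch of the registry pattern MMSegmentation relies on
# (via MMEngine). Illustrative only -- not the library's real code.

class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self):
        # Decorator that records a class under its own name.
        def decorator(cls):
            self._modules[cls.__name__] = cls
            return cls
        return decorator

    def build(self, cfg):
        # cfg is a dict like {'type': 'PSPHead', ...constructor kwargs}.
        cfg = dict(cfg)
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)

DECODE_HEADS = Registry('decode_head')

@DECODE_HEADS.register_module()
class PSPHead:
    def __init__(self, in_channels, num_classes):
        self.in_channels = in_channels
        self.num_classes = num_classes

# A config file reduces to a dict naming the component to assemble.
head = DECODE_HEADS.build(dict(type='PSPHead', in_channels=2048, num_classes=19))
print(type(head).__name__, head.num_classes)  # → PSPHead 19
```

Because every backbone, head, loss, and dataset is built through the same `type`-keyed lookup, swapping one component for another is a one-line config change rather than a code change.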
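The "modular config system" mentioned above composes a model by layering overrides on top of a base configuration. The following is a hedged sketch of that idea in plain Python, assuming a simple recursive dict merge; MMEngine's real `Config` class (with `_base_` files) does considerably more.

```python
# Sketch of config composition: a child config overrides nested keys
# of a base config. Illustrative only, not MMEngine's implementation.

def merge(base, override):
    """Recursively merge override into a copy of base."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

base = {
    'model': {
        'backbone': {'type': 'ResNetV1c', 'depth': 50},
        'decode_head': {'type': 'PSPHead', 'num_classes': 19},
    },
    'train_dataloader': {'batch_size': 2},
}

# Fine-tune on a 150-class dataset with a larger batch: override only
# what changes, inherit everything else from the base config.
child = merge(base, {
    'model': {'decode_head': {'num_classes': 150}},
    'train_dataloader': {'batch_size': 4},
})

print(child['model']['decode_head'])        # → {'type': 'PSPHead', 'num_classes': 150}
print(child['model']['backbone']['depth'])  # → 50
```

This is why the Self-Hosting steps amount to "modify config files": pointing at a new dataset directory or adjusting batch size is a small override, while the untouched parts of the base config carry over unchanged.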