# ggml — Lightweight Tensor Library for Machine Learning in C

## Quick Use
```bash
git clone https://github.com/ggml-org/ggml.git
cd ggml
cmake -B build
cmake --build build -j
./build/bin/test-mul-mat
```

## Introduction
ggml is a tensor computation library written in C that focuses on efficient CPU inference for machine learning models. It is the engine behind llama.cpp and whisper.cpp, enabling millions of users to run large language models and speech recognition locally without requiring a GPU.

## What ggml Does
- Provides tensor operations optimized for CPU inference (AVX, AVX2, AVX-512, ARM NEON)
- Supports integer quantization formats (Q4, Q5, Q8) to reduce memory usage
- Implements automatic differentiation for training small models
- Offers a computation graph API for defining and executing model architectures
- Powers the GGUF model format used across the local AI ecosystem

## Architecture Overview
ggml represents computations as a directed acyclic graph of tensor operations. Users build a computation graph by chaining operations, then execute it in a single pass. Memory is managed through a scratch buffer allocator that minimizes allocations. Quantization kernels are hand-optimized in C and assembly for each target architecture, achieving high throughput without GPU dependencies.

## Self-Hosting & Configuration
- Build with CMake on Linux, macOS, or Windows
- No external dependencies beyond a C compiler
- Enable BLAS backends (OpenBLAS, Apple Accelerate) for matrix multiply acceleration
- Optional CUDA and Metal backends for GPU offloading
- Configure quantization level based on available RAM vs. quality tradeoff

## Key Features
- Zero external dependencies for the core library
- Aggressive quantization (4-bit, 5-bit) with minimal quality loss
- Hand-tuned SIMD kernels for x86 and ARM platforms
- Memory-mapped model loading for instant startup
- Foundation of the GGUF ecosystem (llama.cpp, whisper.cpp, and more)

## Comparison with Similar Tools
- **PyTorch** — GPU-first training framework; ggml targets CPU inference and edge deployment
- **ONNX Runtime** — cross-platform inference with graph optimization; ggml offers deeper quantization support
- **TensorFlow Lite** — mobile inference runtime; ggml supports larger models via aggressive quantization
- **Candle** — Rust ML framework by Hugging Face; ggml is C-based with broader quantization format support

## FAQ
**Q: Is ggml only for LLMs?**
A: No, it supports general tensor operations. It powers speech, vision, and language models.

**Q: What is the GGUF format?**
A: GGUF is the model file format developed alongside ggml for storing quantized model weights with metadata.

**Q: Does ggml support GPU acceleration?**
A: Yes, optional CUDA and Metal backends can offload computation to GPUs, though CPU remains the primary target.

**Q: How much RAM do quantized models need?**
A: A 7B parameter model at Q4 quantization requires roughly 4 GB of RAM.

## Sources
- https://github.com/ggml-org/ggml