[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"workflow-asset-9c1b4b6e":3,"seo:featured-workflow:9c1b4b6e-4ddd-11f1-9bc6-00163e2b0d79:es":83,"workflow-related-asset-9c1b4b6e-9c1b4b6e-4ddd-11f1-9bc6-00163e2b0d79":84},{"id":4,"uuid":5,"slug":6,"title":7,"description":8,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":12,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":14,"tags":21,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":28,"asset_kind":29,"target_tools":30,"install_mode":34,"entrypoint":18,"risk_profile":35,"dependencies":37,"verification":43,"agent_metadata":46,"agent_fit":59,"trust":71,"provenance":80,"created_at":82,"updated_at":82},3247,"9c1b4b6e-4ddd-11f1-9bc6-00163e2b0d79","asset-9c1b4b6e","Hyperopt — Distributed Hyperparameter Optimization in Python","Hyperopt uses Tree of Parzen Estimators and random search to efficiently optimize hyperparameters for machine learning models, with optional distributed execution via MongoDB.","8a910e34-3180-11f1-9bc6-00163e2b0d79","Script Depot","",0,"en",[15],{"id":16,"step_order":17,"title":18,"description":11,"prompt_template":19,"variables":11,"depends_on":20,"expected_output":11},3810,1,"Hyperopt Optimization","# Hyperopt — Distributed Hyperparameter Optimization in Python\n\n## Quick Use\n```bash\npip install hyperopt\npython -c \"\nfrom hyperopt import fmin, tpe, hp\nbest = fmin(\n    fn=lambda x: (x - 2) ** 2,\n    space=hp.uniform('x', -10, 10),\n    algo=tpe.suggest,\n    max_evals=100\n)\nprint(best)\n\"\n```\n\n## Introduction\nHyperopt is a Python library for serial and parallel optimization over search spaces that may include real-valued, discrete, and conditional dimensions. 
Its Tree of Parzen Estimators (TPE) algorithm is widely used in machine learning to find optimal hyperparameter configurations faster than grid or random search.\n\n## What Hyperopt Does\n- Optimizes any black-box function over complex search spaces\n- Implements TPE, random search, and adaptive TPE algorithms\n- Supports conditional and nested hyperparameter definitions\n- Distributes trials across workers via MongoDB for parallel search\n- Stores trial history for analysis and warm-starting future runs\n\n## Architecture Overview\nHyperopt separates the objective function, search space definition, and optimization algorithm. The fmin driver iterates by asking the algorithm (e.g., TPE) to suggest a point, evaluating the objective, and recording the result in a Trials object. For distributed operation, MongoTrials replaces the in-memory store with a MongoDB-backed queue that multiple workers consume.\n\n## Self-Hosting & Configuration\n- Install via pip; add pymongo for distributed trials\n- Define search spaces using hp.uniform, hp.choice, hp.loguniform, etc.\n- Set max_evals to control the total number of evaluations\n- Use MongoTrials with a running MongoDB instance for parallel workers\n- Launch hyperopt-mongo-worker processes on each machine\n\n## Key Features\n- TPE algorithm finds good configurations with fewer evaluations than grid search\n- Expressive search space language with conditional parameters via hp.choice\n- Trials object stores all results for post-hoc analysis and plotting\n- Scales horizontally with MongoDB-backed distributed trials\n- Lightweight dependency footprint suitable for any ML framework\n\n## Comparison with Similar Tools\n- **Optuna** — more modern API with pruning and dashboard; Hyperopt's TPE is well-established in academic literature\n- **Ray Tune** — broader scope with scheduler integration; Hyperopt is simpler for single-machine use\n- **Scikit-Optimize** — Bayesian optimization with Gaussian processes; Hyperopt's TPE handles 
categorical parameters more naturally\n- **Nevergrad** — gradient-free optimization focused on numerical problems; Hyperopt is tuned for ML hyperparameters\n\n## FAQ\n**Q: What is TPE and why use it?**\nA: The Tree of Parzen Estimators builds separate density models over configurations that scored well and those that scored poorly, then proposes points where the ratio of good to bad density is highest. This makes it more sample-efficient than random search.\n\n**Q: Can Hyperopt optimize neural network architectures?**\nA: Yes. Use hp.choice to define conditional spaces that represent different layer configurations.\n\n**Q: How do I resume an interrupted search?**\nA: Pass a previously saved Trials object to fmin and set max_evals higher than the number of trials already completed; the search continues from where it left off.\n\n**Q: Does Hyperopt support early stopping?**\nA: Not per trial. fmin accepts an early_stop_fn callback (e.g., hyperopt.early_stop.no_progress_loss) to halt a stalled search, but there is no built-in trial pruning; use Optuna for that, or have your objective return STATUS_FAIL for unpromising trials.\n\n## Sources\n- https:\u002F\u002Fgithub.com\u002Fhyperopt\u002Fhyperopt\n- http:\u002F\u002Fhyperopt.github.io\u002Fhyperopt\u002F","0",[22],{"id":23,"name":24,"slug":25,"icon":26},11,"Scripts","script","📜",false,"f9450060cceecdbf4d9dc9a4fd8443cc2af5532e9c7e9b0322e463030cddd904","skill",[31,32,33],"claude_code","codex","gemini_cli","single",{"executes_code":27,"modifies_global_config":27,"requires_secrets":36,"uses_absolute_paths":27,"network_access":27},[],{"npm":38,"pip":39,"brew":41,"system":42},[],[40],"hyperopt",[],[],{"commands":44,"expected_files":45},[],[18],{"asset_kind":29,"target_tools":47,"install_mode":34,"entrypoint":18,"risk_profile":48,"dependencies":50,"content_hash":28,"verification":55,"inferred":58},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":49,"uses_absolute_paths":27,"network_access":27},[],{"npm":51,"pip":52,"brew":53,"system":54},[],[40],[],[],{"commands":56,"expected_files":57},[],[18],true,{"target":32,"score":60,"status":61,"policy":62,"why":63,"asset_kind":29,"install_mode":34},98,"native","allow",[64,65,66,67,68,69,70],"target_tools includes codex","asset_kind skill","install_mode 
single","markdown-only","policy allow","safe markdown-only Codex install","trust established",{"author_trust_level":72,"verified_publisher":27,"asset_signed_hash":28,"signature_status":73,"install_count":12,"report_count":12,"dangerous_capability_badges":74,"review_status":75,"signals":76},"established","hash_only",[],"unreviewed",[77,78,79],"author has published assets","content hash available","no dangerous capability badges",{"owner_uuid":9,"owner_name":10,"source_url":81,"content_hash":28,"visibility":17,"created_at":82,"updated_at":82},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-9c1b4b6e","2026-05-12 16:36:02",null,[85,137,184,230],{"id":86,"uuid":87,"slug":88,"title":89,"description":90,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":91,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":92,"tags":93,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":95,"asset_kind":29,"target_tools":96,"install_mode":34,"entrypoint":97,"risk_profile":98,"dependencies":100,"verification":105,"agent_metadata":108,"agent_fit":120,"trust":122,"provenance":125,"created_at":127,"updated_at":128,"__relatedScore":129,"__relatedReasons":130,"__sharedTags":135},1916,"ad894074-3d9d-11f1-9bc6-00163e2b0d79","optuna-automatic-hyperparameter-optimization-framework-ad894074","Optuna — Automatic Hyperparameter Optimization Framework","Optuna is an automatic hyperparameter optimization framework for machine learning. 
It provides an imperative define-by-run API that lets you construct search spaces dynamically, with built-in pruning, visualization, and distributed optimization across multiple workers.",70,[],[94],{"id":23,"name":24,"slug":25,"icon":26},"71705f39396ce51488ae7955b95283dcf0f0b40592f043a09a0463e5f727ee44",[31,32,33],"Optuna HPO",{"executes_code":27,"modifies_global_config":27,"requires_secrets":99,"uses_absolute_paths":27,"network_access":27},[],{"npm":101,"pip":102,"brew":103,"system":104},[],[],[],[],{"commands":106,"expected_files":107},[],[97],{"asset_kind":29,"target_tools":109,"install_mode":34,"entrypoint":97,"risk_profile":110,"dependencies":112,"content_hash":95,"verification":117},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":111,"uses_absolute_paths":27,"network_access":27},[],{"npm":113,"pip":114,"brew":115,"system":116},[],[],[],[],{"commands":118,"expected_files":119},[],[97],{"target":32,"score":60,"status":61,"policy":62,"why":121,"asset_kind":29,"install_mode":34},[64,65,66,67,68,69,70],{"author_trust_level":72,"verified_publisher":27,"asset_signed_hash":95,"signature_status":73,"install_count":12,"report_count":12,"dangerous_capability_badges":123,"review_status":75,"signals":124},[],[77,78,79],{"owner_uuid":9,"owner_name":10,"source_url":126,"content_hash":95,"visibility":17,"created_at":127,"updated_at":128},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Foptuna-automatic-hyperparameter-optimization-framework-ad894074","2026-04-22 00:18:05","2026-05-12 
05:07:37",120.77688752307861,[131,132,133,134],"topic-match","same-kind","same-target","same-author",[25,136],"scripts",{"id":138,"uuid":139,"slug":140,"title":141,"description":142,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":143,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":144,"tags":145,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":147,"asset_kind":29,"target_tools":148,"install_mode":34,"entrypoint":149,"risk_profile":150,"dependencies":152,"verification":157,"agent_metadata":160,"agent_fit":172,"trust":174,"provenance":177,"created_at":179,"updated_at":180,"__relatedScore":181,"__relatedReasons":182,"__sharedTags":183},1301,"b0f2e5e4-37db-11f1-9bc6-00163e2b0d79","ray-distributed-computing-python-ai-workloads-b0f2e5e4","Ray — Distributed Computing for Python and AI Workloads","Ray is a unified framework for scaling Python and AI applications. 
From distributed training and hyperparameter search to large-scale data processing and model serving — Ray powers the infrastructure behind ChatGPT, Uber, and Pinterest.",84,[],[146],{"id":23,"name":24,"slug":25,"icon":26},"2525c38e9ed9d7542c94e70cb0552db293ca53899bb7ccd1a6edd2d2713b1ab0",[31,32,33],"SKILL.md",{"executes_code":27,"modifies_global_config":27,"requires_secrets":151,"uses_absolute_paths":27,"network_access":27},[],{"npm":153,"pip":154,"brew":155,"system":156},[],[],[],[],{"commands":158,"expected_files":159},[],[11],{"asset_kind":29,"target_tools":161,"install_mode":34,"entrypoint":149,"risk_profile":162,"dependencies":164,"content_hash":147,"verification":169},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":163,"uses_absolute_paths":27,"network_access":27},[],{"npm":165,"pip":166,"brew":167,"system":168},[],[],[],[],{"commands":170,"expected_files":171},[],[11],{"target":32,"score":60,"status":61,"policy":62,"why":173,"asset_kind":29,"install_mode":34},[64,65,66,67,68,69,70],{"author_trust_level":72,"verified_publisher":27,"asset_signed_hash":147,"signature_status":73,"install_count":12,"report_count":12,"dangerous_capability_badges":175,"review_status":75,"signals":176},[],[77,78,79],{"owner_uuid":9,"owner_name":10,"source_url":178,"content_hash":147,"visibility":17,"created_at":179,"updated_at":180},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fray-distributed-computing-python-ai-workloads-b0f2e5e4","2026-04-14 16:26:53","2026-05-12 
17:17:57",114.89412838857145,[131,132,133,134],[25,136],{"id":185,"uuid":186,"slug":187,"title":188,"description":189,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":190,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":191,"tags":192,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":194,"asset_kind":29,"target_tools":195,"install_mode":34,"entrypoint":149,"risk_profile":196,"dependencies":198,"verification":203,"agent_metadata":206,"agent_fit":218,"trust":220,"provenance":223,"created_at":225,"updated_at":226,"__relatedScore":227,"__relatedReasons":228,"__sharedTags":229},3251,"da7cf503-4ddd-11f1-9bc6-00163e2b0d79","asset-da7cf503","Nevergrad — Gradient-Free Optimization by Meta","Nevergrad is a gradient-free optimization platform from Meta Research providing a unified interface to derivative-free optimizers for hyperparameter tuning, reinforcement learning, and scientific 
computing.",5,[],[193],{"id":23,"name":24,"slug":25,"icon":26},"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":197,"uses_absolute_paths":27,"network_access":27},[],{"npm":199,"pip":200,"brew":201,"system":202},[],[],[],[],{"commands":204,"expected_files":205},[],[],{"asset_kind":29,"target_tools":207,"install_mode":34,"entrypoint":149,"risk_profile":208,"dependencies":210,"content_hash":194,"verification":215,"inferred":58},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":209,"uses_absolute_paths":27,"network_access":27},[],{"npm":211,"pip":212,"brew":213,"system":214},[],[],[],[],{"commands":216,"expected_files":217},[],[],{"target":32,"score":60,"status":61,"policy":62,"why":219,"asset_kind":29,"install_mode":34},[64,65,66,67,68,69,70],{"author_trust_level":72,"verified_publisher":27,"asset_signed_hash":194,"signature_status":73,"install_count":12,"report_count":12,"dangerous_capability_badges":221,"review_status":75,"signals":222},[],[77,78,79],{"owner_uuid":9,"owner_name":10,"source_url":224,"content_hash":194,"visibility":17,"created_at":225,"updated_at":226},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Fasset-da7cf503","2026-05-12 16:37:47","2026-05-12 
19:49:18",93.16722687557547,[131,132,133,134],[25,136],{"id":231,"uuid":232,"slug":233,"title":234,"description":235,"author_id":9,"author_name":10,"author_avatar":11,"token_estimate":12,"time_saved":12,"model_used":11,"fork_count":12,"vote_count":12,"view_count":236,"parent_id":12,"parent_uuid":11,"lang_type":13,"steps":237,"tags":238,"has_voted":27,"visibility":17,"share_token":11,"is_featured":12,"content_hash":240,"asset_kind":29,"target_tools":241,"install_mode":34,"entrypoint":242,"risk_profile":243,"dependencies":245,"verification":250,"agent_metadata":253,"agent_fit":265,"trust":272,"provenance":276,"created_at":278,"updated_at":279,"__relatedScore":280,"__relatedReasons":281,"__sharedTags":282},1461,"4772853c-3920-11f1-9bc6-00163e2b0d79","tikv-distributed-transactional-key-value-store-raft-4772853c","TiKV — Distributed Transactional Key-Value Store on Raft","A CNCF-graduated distributed key-value store written in Rust that powers TiDB. Provides horizontal scaling, strong consistency via Raft, geo-replication, and ACID transactions with Percolator-style MVCC.",57,[],[239],{"id":23,"name":24,"slug":25,"icon":26},"4578ab35923f76a87c715ccf94f981088240a37cb1f50ba5c2ad0c22792cc872",[31,32,33],"TiKV Guide",{"executes_code":27,"modifies_global_config":27,"requires_secrets":244,"uses_absolute_paths":27,"network_access":58},[],{"npm":246,"pip":247,"brew":248,"system":249},[],[],[],[],{"commands":251,"expected_files":252},[],[242],{"asset_kind":29,"target_tools":254,"install_mode":34,"entrypoint":242,"risk_profile":255,"dependencies":257,"content_hash":240,"verification":262},[31,32,33],{"executes_code":27,"modifies_global_config":27,"requires_secrets":256,"uses_absolute_paths":27,"network_access":58},[],{"npm":258,"pip":259,"brew":260,"system":261},[],[],[],[],{"commands":263,"expected_files":264},[],[242],{"target":32,"score":266,"status":267,"policy":268,"why":269,"asset_kind":29,"install_mode":34},64,"needs_confirmation","confirm",[64,65,66,270,271,70],"policy 
confirm","risk_profile.network_access is true",{"author_trust_level":72,"verified_publisher":27,"asset_signed_hash":240,"signature_status":73,"install_count":12,"report_count":12,"dangerous_capability_badges":273,"review_status":75,"signals":275},[274],"network_access",[77,78],{"owner_uuid":9,"owner_name":10,"source_url":277,"content_hash":240,"visibility":17,"created_at":278,"updated_at":279},"https:\u002F\u002Ftokrepo.com\u002Fen\u002Fworkflows\u002Ftikv-distributed-transactional-key-value-store-raft-4772853c","2026-04-16 07:10:22","2026-05-12 10:21:43",85.6451419903444,[131,132,133,134],[25,136]]