# Refact — Local-First AI Coding Assistant

> Refact is an open-source, local-first AI coding assistant: install the IDE plugin, run the local `refact-lsp` engine, and connect a model provider.

## Quick Use

1. Install an IDE plugin (see docs):
   - VS Code: https://docs.refact.ai/installation/vs-code/
   - JetBrains: https://docs.refact.ai/installation/jetbrains/
2. Open a workspace and launch Refact; it starts the local `refact-lsp` engine.
3. In Provider Setup, connect a provider/runtime and pick default models for chat and agent work.

## Intro

Refact is an open-source, local-first AI coding assistant: install the IDE plugin, let the local `refact-lsp` engine drive it, and connect a model provider to run Q&A, refactoring, and end-to-end agent workflows in the editor.

- **Best for:** developers who want a local agent engine embedded in the editor, with bring-your-own providers and repeatable workflows
- **Works with:** VS Code or JetBrains; local `refact-lsp` engine; multiple LLM providers/runtimes (see README)
- **Setup time:** 10–30 minutes

## Practical Notes

- The README describes an IDE-embedded flow: the plugin runs a local `refact-lsp` engine per workspace.
- Validate the setup by running one agent task end-to-end, then measure time saved over 3 repeated tasks (same prompt, same repo).

## What to standardize before rollout

Refact becomes much more valuable when teams standardize:

1. **Provider policy**: which providers are allowed for which repos (open-source vs. private).
2. **Default models**: one for chat, one for agent work, one for embeddings if needed.
3. **Task boundaries**: which actions require explicit approval (dependency updates, migrations, deploy scripts).

## Suggested first workflows

- "Explain module X" + "write a unit test for function Y".
- "Refactor a file" with a measurable constraint (max 10 lines changed; no behavior change).
- "Fix a failing test" with reproduction steps and a time budget.

Use the same 2–3 workflows across the team so you can compare outcomes consistently.
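The time-saved check from Practical Notes can be scripted so the before/after comparison is repeatable. A minimal sketch (the durations and the helper below are illustrative; they are not part of Refact):

```python
import statistics

def time_saved_report(baseline_secs, assisted_secs):
    """Compare manual vs. assisted durations for the same repeated task
    (same prompt, same repo) and report the median savings."""
    base = statistics.median(baseline_secs)
    assisted = statistics.median(assisted_secs)
    saved = base - assisted
    return {
        "baseline_median_s": base,
        "assisted_median_s": assisted,
        "saved_s": saved,
        "saved_pct": round(100 * saved / base, 1),
    }

# Three repetitions of the same task, timed manually (seconds).
report = time_saved_report([600, 540, 660], [240, 300, 270])
print(report)  # medians 600 vs. 270 → 330 s saved (55.0%)
```

Using the median over 3 runs dampens one-off outliers (a cold cache, a flaky test) without needing a larger sample.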
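The task-boundary policy above can be encoded as a simple approval gate in whatever wrapper drives agent runs. The action names and the `APPROVAL_REQUIRED` set are hypothetical examples, not a Refact API:

```python
# High-risk actions that must pause for explicit human sign-off
# (illustrative list, matching the rollout policy above).
APPROVAL_REQUIRED = {"dependency_update", "db_migration", "deploy_script"}

def gate(action: str, approved: bool = False) -> bool:
    """Return True if the agent may proceed with `action`."""
    if action in APPROVAL_REQUIRED and not approved:
        return False  # block until a human approves
    return True

assert gate("refactor_file")                 # low-risk: proceeds
assert not gate("db_migration")              # high-risk: blocked
assert gate("db_migration", approved=True)   # proceeds after sign-off
```

Starting with a deny-list like this keeps read/modify tasks frictionless while forcing a pause on the actions the rollout policy flags.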
### FAQ

**Q: Is it only for chat?**
A: No. The README positions it as an agent that can plan, execute, and iterate in engineering workflows.

**Q: Do I have to use one provider?**
A: No. It supports multiple provider families; choose per your policy.

**Q: How do I avoid risky changes?**
A: Define approval-gated actions and start with read/modify-only tasks before automating merges or deploys.

## Source & Thanks

> Source: https://github.com/smallcloudai/refact
> License: BSD-3-Clause
> GitHub stars: 3,541 · forks: 310

---

Source: https://tokrepo.com/en/workflows/refact-local-first-ai-coding-assistant
Author: AI Open Source