Documentation

Overview
Package reverse_think is the Go-native realization of the ~/.claude/skills/reverse-think/ markdown skill: it sends a reverse-thinking prompt to an Anthropic-compatible endpoint (MiniMax by default) and returns a typed counterfactual.Deliverable.
Why a Go package, not a shell wrapper
The markdown skill at ~/.claude/skills/reverse-think/SKILL.md is invoked manually by Claude (this dev) before non-trivial design decisions. Production agents (Flyto running in platform/) cannot read markdown skills -- they need a callable Go function whose return value plugs into staging.Record / hooks.HookResult / evolve.LogReplayer. Same prompt template, same JSON schema, but typed at the seams.
API key handling
The package does NOT read MINIMAX_TOKEN_PLAN_KEY from the environment. Callers must inject Client.APIKey explicitly. Rationale: core stays environment-agnostic; platform / CLI / TUI each surface their own secret loading (e.g. platform layer reads env once at boot, the markdown skill reads core/.env via shell). Hard-coding env lookup here couples core to one secret-management style and breaks SDK use where callers want to pass a key from a vault, KMS, or test fixture.
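The injection contract can be sketched as follows. `Client` here is a pared-down stand-in for the package's type, and the fallback key is purely illustrative; the point is that the env lookup lives in the caller, never in the package:

```go
package main

import (
	"fmt"
	"os"
)

// Client is a minimal stand-in for reverse_think.Client: it only ever sees
// the final key string, never the environment.
type Client struct {
	APIKey string
}

func main() {
	// The caller (platform layer, CLI, TUI, or a test) decides where the
	// key comes from -- env at boot, vault, KMS, or a fixture.
	key := os.Getenv("MINIMAX_TOKEN_PLAN_KEY")
	if key == "" {
		key = "test-fixture-key" // illustrative fallback for this sketch
	}
	c := &Client{APIKey: key}
	fmt.Println(len(c.APIKey) > 0) // true
}
```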
max_tokens default
Default is 8000. Per SKILL.md, the Anthropic-compatible MiniMax endpoint shares this budget between thinking and text output: setting it to 3000 once produced an empty text response (thinking consumed the entire pool). Token cost is free under the project's plan; calls per 5h is the only quota and is effectively unreachable. Do not shrink for "cost" reasons -- shrink only when the question is so simple that long thinking is wasteful, and even then favour an in-prompt instruction over reducing the pool.
Reference files
client.go -- Client + Run (HTTP roundtrip, JSON parse, Validate)
template.go -- Chinese prompt renderer (SKILL.md template aligned)
client_test.go -- httptest-backed roundtrip cases (no real MiniMax)
Index

Constants
const (
	// DefaultEndpoint is the MiniMax Anthropic-compatible messages
	// endpoint per ~/.claude/skills/reverse-think/SKILL.md. External
	// clients can swap to any Anthropic-protocol-compatible server.
	DefaultEndpoint = "https://api.minimaxi.com/anthropic/v1/messages"

	// DefaultModel is the recommended highspeed variant. Slower variants
	// (e.g. MiniMax-M2.7) work but burn more wall-time on the same call
	// budget.
	DefaultModel = "MiniMax-M2.7-highspeed"

	// DefaultMaxTokens is the thinking + text shared budget. Setting it
	// below 4000 risks empty text output (thinking eats the pool); see
	// the SKILL.md "Important" callout.
	DefaultMaxTokens = 8000

	// DefaultTimeout caps a single Run call. MiniMax highspeed typically
	// returns in 5-15s on substantive prompts (per SKILL.md empirical
	// data); a 90s cap leaves slack for cold-start latency without
	// hanging the caller indefinitely.
	DefaultTimeout = 90 * time.Second
)
Defaults for Client fields. Each is overridable on construction.
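A sketch of how zero-value fields might fall back to these defaults at call time. The constant and field names mirror the docs above, but `effectiveMaxTokens` is a hypothetical helper, not the package's actual code:

```go
package main

import "fmt"

// DefaultMaxTokens mirrors the package constant.
const DefaultMaxTokens = 8000

// Client is a pared-down stand-in carrying only the field defaulted here.
type Client struct {
	MaxTokens int
}

// effectiveMaxTokens shows the zero-value fallback the docs describe:
// an unset field resolves to the package default at call time.
func (c *Client) effectiveMaxTokens() int {
	if c.MaxTokens == 0 {
		return DefaultMaxTokens
	}
	return c.MaxTokens
}

func main() {
	fmt.Println((&Client{}).effectiveMaxTokens())                // 8000
	fmt.Println((&Client{MaxTokens: 6000}).effectiveMaxTokens()) // 6000
}
```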
Variables
var (
	ErrAPIKeyRequired   = errors.New("reverse_think: APIKey required")
	ErrEndpointFailed   = errors.New("reverse_think: endpoint returned non-2xx")
	ErrNoTextContent    = errors.New("reverse_think: response has no text content block")
	ErrParseDeliverable = errors.New("reverse_think: failed to parse Deliverable JSON from response")
)
Errors returned by Client.Run.
var ErrPromptIncomplete = errors.New("reverse_think: prompt missing required field")
ErrPromptIncomplete is returned by Render when any required field of Prompt is empty. Required = Scenario, OptionA, OptionB, Recommendation, RecommendationReason. The reverse pass cannot meaningfully challenge a recommendation that was never spelled out.
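The required-field gate can be sketched like this. The struct and error names mirror the docs, but `checkComplete` is an illustrative reconstruction, not the package source:

```go
package main

import (
	"errors"
	"fmt"
)

var ErrPromptIncomplete = errors.New("reverse_think: prompt missing required field")

// Prompt mirrors the exported type documented below.
type Prompt struct {
	Scenario             string
	OptionA              string
	OptionB              string
	Recommendation       string
	RecommendationReason string
}

// checkComplete is a hypothetical reconstruction of the gate Render applies
// before touching the template: every field is required.
func checkComplete(p Prompt) error {
	for _, f := range []string{
		p.Scenario, p.OptionA, p.OptionB, p.Recommendation, p.RecommendationReason,
	} {
		if f == "" {
			return ErrPromptIncomplete
		}
	}
	return nil
}

func main() {
	incomplete := Prompt{Scenario: "Go core package", OptionA: "inject key"}
	fmt.Println(checkComplete(incomplete) != nil) // true: OptionB etc. are empty
}
```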
Functions

func Render
Render produces the Chinese prompt string sent to the LLM. Given identical inputs, the output matches the SKILL.md template byte-for-byte, so prompt drift is caught by a golden test (see template_test).
Types

type Annotations
Annotations are the metadata the LLM does not know but the Deliverable schema records: the originating tool name, which CLAUDE.md article-1 step is being run, and an optional decision identifier for downstream correlation.
type Client
type Client struct {
	// APIKey for the Anthropic-compatible endpoint. Required.
	APIKey string

	// Endpoint URL. Empty = DefaultEndpoint.
	Endpoint string

	// Model name. Empty = DefaultModel.
	Model string

	// MaxTokens is the shared thinking + text budget. Zero = DefaultMaxTokens.
	MaxTokens int

	// HTTPClient is the underlying HTTP client. Nil = a new http.Client
	// with DefaultTimeout. Inject it for tests (httptest server) or for a
	// custom transport (proxy / mTLS).
	HTTPClient *http.Client

	// Now returns the current time. Override it in tests for deterministic
	// OccurredAt; nil = time.Now.
	Now func() time.Time
}
Client wraps the HTTP roundtrip + prompt rendering + Deliverable parsing. Zero value is unusable -- APIKey must be set; other fields default at Run time when zero.
func (*Client) Run
func (c *Client) Run(ctx context.Context, p Prompt, annotations Annotations) (*counterfactual.Deliverable, error)
Run renders the prompt, hits the endpoint, parses the response, and returns a validated counterfactual.Deliverable. ToolName, Step, DecisionID are taken from the supplied annotations argument and stamped onto the returned Deliverable; OccurredAt is stamped from c.Now() (or time.Now if c.Now is nil).
CLEVER: We do NOT auto-strip code-fence wrappers (```json...```). Per SKILL.md prompt the LLM is told "直接输出 JSON, 不要任何额外文字"; if it still wraps, that is an LLM compliance bug worth surfacing as ErrParseDeliverable rather than papering over silently. Replacement behaviour is hard to make safe (a fence inside a string field gets mangled).
type Prompt
type Prompt struct {
	// Scenario describes the concrete context: language, files, fields,
	// current state. Required.
	Scenario string

	// OptionA is the preferred option in one line.
	OptionA string

	// OptionB is the alternative in one line.
	OptionB string

	// Recommendation is "A" or "B" -- which option the producer leans
	// toward before the reverse pass.
	Recommendation string

	// RecommendationReason is two to three reasons backing Recommendation.
	RecommendationReason string
}
Prompt carries the variables the Chinese reverse-thinking template renders. Field names align with the markdown SKILL.md template at ~/.claude/skills/reverse-think/SKILL.md.