I have been experimenting with mathematical modeling of instructions (instead of natural language) for more complicated workflows. My hypothesis is that natural languages (like English) are inefficient for transferring instructions that carry a high meta-cognitive load for the models, while more esoteric ways of giving instructions "excite" the models and take them out of "lazy" mode.
In my tests, models like Sonnet and reasoning models seem to REALLY like this style and find it intriguing; I suspect it focuses their attention mechanism and helps keep them from hallucinating.
Here is one of my experiments with a `.goosehints` file:
[ SETS ]
- F : Set of Features
- M : Set of Meta‐Docs (reqs, constraints, deliverables)
- P : Set of Plans (sequence or checklist of actions)
- B : Set of Branches
- W : Set of Work Units
- CC: Set of valid conventional commit messages
- R : Repository State
[ Functions ]
1. Ê: F → M // Extract meta‐doc from feature
2. Â: F → P // Extract action plan (checklist) from feature
3. parent: B → B // Parent branch lookup
4. ID: F → String // Unique feature identifier
5. name: B → String
6. path: B → String // .META path
7. complete?: P → Bool // Checks if plan is fully completed
[ Naming Rules ]
- newName(μ, x): String, where μ ∈ {"feature","validation","done"}.
[ Operators ]
1. B̂: (m, |R⟩) → (b, |R′⟩)
- Create branch b with name(b)=newName("feature",ID(f)).
- Initialize .META at path(b).
2. Ĉ: ((b, w), |R⟩) → |R′⟩
- Commit w ∈ W to branch b (must have commitMessage(w) ∈ CC).
- Update meta at path(b).
3. D̂: ((b, s), |R⟩) → |R′⟩
- Mark step s in plan(b) as done.
- Prove completion to user (e.g. provide justification/proof).
4. Ŝ: W → ℘(F)
- Spawn subtasks.
5. V̂: (b, |R⟩) → (v, |R′⟩)
- Create validation branch v = newName("validation",ID(f)).
6. T̂: ((b, v), |R⟩) → |R′⟩
- Update top‐level tracker with (b, v).
7. M̂: (b, parent(b), |R⟩) → |R′⟩
- Merge b → parent(b).
- Rename b → newName("done", baseName(b)).
[ MAIN ] // Recursive Procedure P
P(f, |R⟩) → |R*⟩:
1. m ← Ê(f)
2. p ← Â(f)
3. (b, |R₁⟩) ← B̂(m, |R⟩)
4. |R₂⟩ ← Ĉ((b, "initial‐META"), |R₁⟩)
5. For each action s in p:
- |R₂⟩ ← D̂((b, s), |R₂⟩) // Mark step done + prove to user
6. while ¬complete?(p) or deliverables incomplete:
- pick w ∈ W with commitMessage(w) ∈ CC
- |R₂⟩ ← Ĉ((b, w), |R₂⟩)
- for each f′ ∈ Ŝ(w):
|R₂⟩ ← P(f′, |R₂⟩)
7. (v, |R₃⟩) ← V̂(b, |R₂⟩)
8. |R₄⟩ ← T̂((b, v), |R₃⟩)
9. |R₅⟩ ← M̂(b, parent(b), |R₄⟩)
10. return |R₅⟩
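For instance, the membership test commitMessage(w) ∈ CC used by Ĉ can be approximated in practice with a regex over the Conventional Commits header grammar. A minimal sketch in Python (the list of allowed types below is a common convention, my assumption, not something the spec fixes):

```python
import re

# Minimal check for Conventional Commits headers, e.g. "feat(parser): add lookahead".
# Grammar: type, optional (scope), optional "!", then ": " and a description.
CC_HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([\w./-]+\))?!?: \S.*$"
)

def is_conventional(message: str) -> bool:
    """Return True if the first line of `message` is a valid CC header."""
    return bool(CC_HEADER.match(message.splitlines()[0])) if message else False
```

This only validates the header line; body and footer rules (e.g. `BREAKING CHANGE:`) would need extra checks.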
This is effectively a mathematical model for Git workflows: it details repository states, commit histories, meta mappings, and recursive subtasks, while keeping track of all changes and thought processes in Git, which should lead to a structured development process.
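Stripped of the ket notation, the MAIN procedure is ordinary recursion over features. A toy Python sketch, where I collapse the step-6 work loop into direct recursion and encode |R⟩ as a dict mapping branch name → event log (both simplifications, and the branch-name scheme, are my own assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A feature f ∈ F: unique ID, a checklist plan p, and subfeatures from Ŝ."""
    id: str
    plan: list
    subs: list = field(default_factory=list)

def procedure_p(f: Feature, state: dict) -> dict:
    """P(f, |R⟩) → |R*⟩ over a toy repo state; never mutates the caller's dict."""
    branch = f"feature/{f.id}"                              # B̂: create branch
    state = {**state, branch: ["chore: initial META"]}      # Ĉ: initial META commit
    for step in f.plan:                                     # D̂: mark each step done
        state[branch] = state[branch] + [f"done: {step}"]
    for sub in f.subs:                                      # Ŝ: recurse into subtasks
        state = procedure_p(sub, state)
    state = {**state, f"validation/{f.id}": [f"track {branch}"]}  # V̂ + T̂
    state = dict(state)
    state[f"done/{f.id}"] = state.pop(branch)               # M̂: merge, rename to done/
    return state
```

Running this on a feature with one subfeature ends with both `feature/…` branches renamed to `done/…`, each with its own `validation/…` branch, mirroring steps 1–10 above.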
This is how I would start my user prompt:
digest the instructions in .goosehints file
Then I will present my task and ask for a plan based on the instructions:
I want to do [XXXXXXXXXXXXXX]
Create meta-files and directories under `.META/`
give me the process and action plan you would use to satisfy my requirements following instructions in the `.goosehints` file
Give this a shot and feel free to share your results with me; I am thinking about formalizing my notation, so the more test runs and feedback I get, the better.
Update: It seems you must remind the agent in each of your user prompts to adhere to the protocol defined in the `.goosehints` file. As long as you do that, goose will follow the instructions to the best of its abilities.