wave 8: evolutionary methods (4 stubs) #12
Merged
0bserver07 merged 6 commits into main on May 8, 2026
Conversation
Implements Salustowicz & Schmidhuber 1997 (Evol Comp 5(2)).
PPT (probabilistic prototype tree) over instruction set {+,-,*,/,x,R}.
Each generation: sample N=100 programs from PPT, score on 20 fitness
cases x in linspace(-1,1,20), PBIL update at every elite-visited node
toward the elite's symbol with the paper's
P_TARGET = P_T + (1-P_T)*lr*(eps+Fit_best)/(eps+Fit_elite) schedule,
plus per-symbol mutation P_M/(|I|*sqrt(n_visited)).
Headline (seed 3, ~1.3 s on M-series, max-depth 6):
discovered ((x + x*x) + ((x*x + x) * x*x)) = x + x^2 + x^3 + x^4
SSE = 1.06e-30, hits = 20/20, solved at gen 60.
Cross-seed (20 seeds, 300 gens): 6/20 (30%) Koza-hit-solve, 2/20
exact solve. Wider {+,-,*,/,sin,cos,exp,log} set is available behind
--funcs full but does not reliably cross SSE<1e-6 in the v1 budget;
the paper itself uses {+,-,*,/} for this exact target (Table 1).
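The PBIL elite-pull and mutation schedule described above can be sketched as follows. This is a hypothetical helper, not the repo's actual code: `pbil_update`, the `MUT_STEP` increment of 0.1, and the uniform-initialised node distributions are all assumptions; only the target formula `lr*(eps+Fit_best)/(eps+Fit_elite)` and the mutation probability `P_M/(|I|*sqrt(n_visited))` come from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INSTR = 6            # instruction set {+, -, *, /, x, R}
LR, P_M, EPS = 0.01, 0.4, 1e-6
MUT_STEP = 0.1         # assumed per-symbol mutation increment

def pbil_update(node_probs, elite_symbols, fit_best, fit_elite, p_m=P_M):
    """Pull each elite-visited node's distribution toward the elite's
    symbol, then apply per-symbol mutation along the elite path.
    node_probs: list of length-N_INSTR probability vectors, one per
    node the elite program visited; elite_symbols: the symbol index
    the elite chose at each of those nodes."""
    pull = LR * (EPS + fit_best) / (EPS + fit_elite)
    for probs, s in zip(node_probs, elite_symbols):
        probs[s] += (1.0 - probs[s]) * pull   # pull toward elite symbol
        probs /= probs.sum()                  # renormalise
    n_visited = len(node_probs)
    p_mut = p_m / (N_INSTR * np.sqrt(n_visited))
    for probs in node_probs:
        mask = rng.random(N_INSTR) < p_mut
        probs[mask] += MUT_STEP * (1.0 - probs[mask])
        probs /= probs.sum()
    return node_probs
```

Since lower fitness is better here (error), the ratio `(eps+Fit_best)/(eps+Fit_elite)` scales the pull down when the current elite is worse than the best seen so far.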
Files:
- pipe_symbolic_regression.py : algorithm + CLI (deterministic seed)
- visualize_pipe_symbolic_regression.py : 6 PNGs to viz/
- make_pipe_symbolic_regression_gif.py : 690 KB GIF (≤2 MB)
- README.md : 8 sections
- removed problem.py stub
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Pure-numpy double cart-pole (Wieland 1991 EOM, RK4 at dt=0.01 s) +
recurrent net (Elman, H=5, tanh) evolved by ESP (Enforced
Sub-Populations, Gomez 2003): one subpopulation per hidden neuron,
networks assembled by combining one neuron from each subpop, fitness
propagated back.
Headline (seed 0, defaults pop=40, trials=4):
- solved at generation 27 (21,600 trials, ~60 s wallclock on M-series)
- final eval: 20/20 random inits (|theta1_0| <= 4.5 deg) balanced 1000 steps
- partial 6-seed sweep: seeds 0-4 reach 20/20, seed 5 reaches 13/20
Files:
- double_pole_no_velocity.py : CLI; sim + ESP + eval
- visualize_double_pole_no_velocity.py : training curves, rollout, weights
- make_double_pole_no_velocity_gif.py : rendered cart-pole animation
- README.md : 8 sections (Header/Problem/Files/Running/Results/Viz/Deviations/Open)
- viz/training_curves.png, rollout.png, weights.png
- double_pole_no_velocity.gif (605 KB)
Deviations from paper (full list in README):
- ESP rather than full CoSyNE (SPEC permits as v1 simplification)
- pop=40 / trials=4 / max_gen=200 (paper: pop=200) for laptop budget
- Fixed init theta1=4.5 deg during evolution; random in final eval
- Solve threshold = 1000 steps (paper also has a 100,000-step "robust" test)
Removed problem.py stub.
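The RK4 integration at dt=0.01 s can be sketched generically. This is a standard fourth-order Runge-Kutta step, not the repo's code; `derivs` stands in for the Wieland 1991 double-pole equations of motion, and the harmonic-oscillator sanity check below is purely illustrative.

```python
import numpy as np

def rk4_step(derivs, state, action, dt=0.01):
    """One classic fourth-order Runge-Kutta step.
    derivs(state, action) -> d(state)/dt; the double-pole EOM would
    go there, a simple harmonic oscillator is used below as a check."""
    k1 = derivs(state, action)
    k2 = derivs(state + 0.5 * dt * k1, action)
    k3 = derivs(state + 0.5 * dt * k2, action)
    k4 = derivs(state + dt * k3, action)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# sanity check on a system with a known solution: x'' = -x, x(t) = cos(t)
sho = lambda s, a: np.array([s[1], -s[0]])
state = np.array([1.0, 0.0])
for _ in range(100):            # integrate to t = 1.0 at dt = 0.01
    state = rk4_step(sho, state, action=None)
```

At dt=0.01 the global RK4 error on this check is far below 1e-6, which is why the 0.01 s step is a reasonable choice for the stiff double-pole dynamics.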
… Mackey-Glass
Evolutionary outer loop on LSTM hidden weights + closed-form linear
readout (Tikhonov-regularised normal equations) inner loop. Fitness is
closed-loop free-running MSE, per the Schmidhuber-Wierstra-Gomez 2007
scoring rule. Whole-genome co-evolution with elitism, uniform
crossover, Gaussian per-gene mutation, and burst mutation on
stagnation.
Headline (seed=1, hidden=6, pop=40, gens=80, ~140 s on M-series laptop):
- 3 superimposed sines: free-run MSE 0.18 over 299 steps
- Mackey-Glass tau=17: NRMSE@84 = 0.29
Files:
- evolino_sines_mackey_glass.py : model + train + eval + CLI
- visualize_evolino_sines_mackey_glass.py : six PNGs
- make_*_gif.py : closed-loop prediction-across-generations animation (~1.9 MB)
Deviations: whole-genome instead of ESP enforced subpopulations;
population/generations shrunk for laptop budget. Documented in README §7.
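The Tikhonov-regularised closed-form readout in the inner loop amounts to ridge regression on the hidden-state matrix. A minimal sketch, assuming a hypothetical `ridge_readout` name and an assumed regularisation strength; the repo's actual function may differ:

```python
import numpy as np

def ridge_readout(H, y, alpha=1e-4):
    """Closed-form linear readout via the regularised normal equations:
    solve (H^T H + alpha I) w = H^T y.
    H: (T, n_hidden) matrix of LSTM hidden states over T timesteps,
    y: (T,) target values, alpha: Tikhonov regularisation strength."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)
```

Because the readout is solved exactly per genome, evolution only has to search the recurrent weights, which is the core idea of the hybrid scheme.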
10-seed sweep at default budget (pop=40, trials=4, max-gen=100):
- 10/10 seeds reach 1000-step balance during evolution
- 7/10 generalise to 20/20 on the random-init eval
- 2/10 partial generalisation (13/20, 15/20)
- 1/10 brittle (9/20)
- Mean wallclock 58.1 s per seed.
Implements Salustowicz & Schmidhuber 1997 (Evol Comp 5(2)) on the
canonical hard genetic-programming benchmark. Pure numpy + matplotlib.
Algorithm: PPT (probabilistic prototype tree) over instruction set
{AND, OR, NOT, IF, x_0..x_{n-1}}. Each generation samples a population
from the PPT, evaluates on the full 2^n truth table, applies a single
PBIL pull-toward-elite update at every visited node (clamped to
[eps, 1-eps]), and per-component mutation along the elite path with
prob p_mut/(N_INSTR*sqrt(|elite|)). Multi-start (full PPT reset) on
80-generation stagnation.
Bitmask evaluator: each terminal x_i is a 2^n-bit Python int whose j-th
bit is x_i's value on input j; AND/OR/NOT/IF map to bitwise ops, so one
tree evaluation covers the whole truth table in O(tree_size). ~100x
speedup over per-row Python loop, agrees with the slow path on the
canonical XOR-chain.
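The bitmask evaluator idea can be illustrated as follows. This is a sketch, not the repo's implementation: `parity_terminals` and the `XOR`-via-AND/OR/NOT helper are hypothetical names; only the encoding (terminal x_i as a 2^n-bit int whose j-th bit is x_i's value on input row j, logic ops as bitwise ops) comes from the description above.

```python
def parity_terminals(n):
    """Build the n terminal bitmasks: bit j of terms[i] is bit i of
    input index j, so each terminal encodes its whole truth-table
    column in one Python int."""
    terms = []
    for i in range(n):
        bits = 0
        for j in range(2 ** n):
            if (j >> i) & 1:
                bits |= 1 << j
        terms.append(bits)
    return terms

n = 3
MASK = (1 << 2 ** n) - 1            # keep NOT inside the 2^n-bit word
x0, x1, x2 = parity_terminals(n)

def NOT(a):
    return ~a & MASK

def XOR(a, b):                      # expressed with the AND/OR/NOT primitives
    return (a | b) & NOT(a & b)

# even parity of all 2^n input rows evaluated at once
even = NOT(XOR(XOR(x0, x1), x2))
```

One tree evaluation now touches every truth-table row simultaneously, so the cost is O(tree_size) in big-int bitwise ops rather than O(tree_size * 2^n) Python-loop iterations.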
Headline runs (deterministic, M-series laptop):
- 4-bit, seed 6: solved at gen 258, 2.4 s, 16/16 = 100% accuracy.
- 6-bit, seed 0: 240 s budget cap, 46/64 = 71.9% accuracy.
The 6-bit gap is documented in §Deviations: Salustowicz & Schmidhuber
report PIPE solving 6-bit parity but with substantially more
evaluations than 240 s allows. Multi-seed sweep on 4-bit shows 6/11
seeds solve in <=25 s; the 4-bit clean solve substitutes as the
in-budget demonstration that the implementation itself is faithful.
Files:
- pipe_6_bit_parity.py : PPT/sample/update/mutate/multi-start + CLI
with --n-bits parametrization (validates on 3-bit, 4-bit, 6-bit)
- visualize_pipe_6_bit_parity.py : 7 PNGs to viz/, self-trains inline
(no external JSON dependency); --skip-6bit for the fast path
- make_pipe_6_bit_parity_gif.py : 196 KB GIF, 4-bit seed 6, two-panel
(fitness curve + 4x4 correctness grid evolving green)
- README.md : 8 sections including a multi-seed sweep, deviations,
and explicit open questions on bridging the 6-bit gap (more compute,
ADFs, or restoring the paper's iterative inner-loop update)
- removed problem.py stub
Octopus merge of 4 wave-8 stubs per SPEC issue #1.
- wave-8-local/pipe-symbolic-regression : PIPE on Koza f(x)=x+x^2+x^3+x^4 (1997)
- wave-8-local/pipe-6-bit-parity : PIPE on N-bit even parity (1997)
- wave-8-local/evolino-sines-mackey-glass : hybrid neuroevo + linear readout (2007)
- wave-8-local/double-pole-no-velocity : ESP co-evolution on double cart-pole (2005)
All 4 verified by a separate audit subagent: numpy-only, deterministic,
branch protocol followed (no wave-8-local on remote), all 8 README
sections, evolutionary algorithmic faithfulness confirmed (no gradient
descent).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit Report — PR #12 wave 8 (4 evolutionary stubs)
Wave 8 verdict: APPROVE. Independent review by separate Explore
subagent. All 4 stubs algorithmically sound, deterministic, numpy-only,
branch-protocol-compliant. Zero gradient descent across the wave —
exactly the spirit of the evolutionary family.
Per-stub verdicts
Cross-cut findings
Algorithmic faithfulness — verified per stub
Reproduce results
All wallclocks well under the 5-minute budget.
Honest gaps documented
All gaps in §Deviations and §Open questions per SPEC's methodological caveat.
agent-0bserver07 (Claude Code) on behalf of Yad — wave-8 audit subagent
0bserver07 added a commit that referenced this pull request on May 8, 2026:
wave 8: evolutionary methods (4 stubs)
Wave 8 — evolutionary methods (gradient-free)
Four stubs implementing Schmidhuber-lineage evolutionary methods per
SPEC issue #1. Zero gradient descent across all 4 stubs —
algorithmically distinct from gradient-based waves 1-7 and 9-10.
Octopus-merged from 4 local-only wave-8-local/<slug> branches.
- pipe-symbolic-regression : x + x² + x³ + x⁴ solved exactly at gen 60; 6/20 seeds Koza-hit-solve; 1.3 s wall
- pipe-6-bit-parity
- evolino-sines-mackey-glass
- double-pole-no-velocity
Audit verdict (separate Explore subagent)
APPROVE across all 4 stubs.
- No wave-8-local/* branches on origin (verified).
- PBIL update P(s*) ← P(s*) + lr · P_TARGET · (1 − P(s*)), per-component mutation P_M / (NI · √n_visited). No crossover. No gradient.
- No problem.py stubs left (all 4 explicitly removed), no __pycache__ committed.
- agent-0bserver07 <agent-0bserver07@users.noreply.github.com>.
Per-stub deviations (in each stub's §Deviations)
- {+, −, *, /} only (paper's Table 1 set; the wider {sin, cos, exp, log} set doesn't reliably converge in the 5-min budget; available behind --funcs full).
Wave 0 → 8 progress
7 + 5 + 5 + 5 + 4 + 6 + 5 + 4 = 41/50 v1 stubs done (82%). 2 waves remaining = 9 stubs.
agent-0bserver07 (Claude Code) on behalf of Yad