Project: HAL-63

JTH


Project HAL-63


For years I’ve been collecting and analyzing market statistics, watching how conditions shift, how signals break down, and how outcomes often depend more on context than on the data itself. With recent advances in AI, I’ve been working to re-engineer how data is collected, structured, and interpreted in context with real-time events. AI plays a key role in this system. It doesn’t just add insight; it helps shape the framework, allowing me to configure diagnostic engines that can assign probabilities based on both recent signals and historical context stored in memory.

HAL-63 is a structured, context-based system designed to assign a probable outcome grounded in both historical and present conditions. It blends AI-driven logic with rule-based structure and human-aligned interpretation: an assisted-thinking model that supports clarity without removing control.

Broadly, it operates across three functional layers:

1. Data Layer

This is the system’s foundational input layer. It:
  • Holds structured inputs such as raw price, volume, and a core triad of separated indicators to prevent multicollinearity
  • Incorporates macro context such as monetary policy shifts, liquidity regimes, economic cycles, and scheduled events
  • Applies rule-based logic for tagging and filtering
  • Functions as a cold-standby, non-interruptive output engine. It provides a secondary baseline that can be compared against AI-driven diagnostic outputs for validation and fallback alignment. This supports a no-single-point-of-failure design by maintaining independence from the AI decision layer (a small sketch follows this list)
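
To make the Data Layer's role more concrete, here is a minimal Python sketch of rule-based tagging plus an AI-independent baseline read. The Bar, tag_bar, and baseline_bias names, the tag labels, and the thresholds are illustrative assumptions, not HAL-63's actual internals.

from dataclasses import dataclass, field

@dataclass
class Bar:
    """One structured input row: raw price and volume for a single period."""
    close: float
    volume: float
    tags: list[str] = field(default_factory=list)

def tag_bar(bar: Bar, prev_close: float, avg_volume: float) -> Bar:
    """Apply simple rule-based tags; the 1.5x volume threshold is a placeholder."""
    if bar.volume > 1.5 * avg_volume:
        bar.tags.append("high_volume")
    if bar.close > prev_close:
        bar.tags.append("up_close")
    return bar

def baseline_bias(bars: list[Bar]) -> str:
    """Cold-standby baseline: a purely rule-based read, independent of any AI layer,
    that AI-driven diagnostic outputs can be checked against."""
    ups = sum(1 for b in bars if "up_close" in b.tags)
    return "bullish" if ups > len(bars) / 2 else "bearish"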

2. Operations Layer

This is where real-time logic runs. The system:
  • Interprets current market conditions through multiple independent signal paths. This helps reduce signal noise by keeping the logic for candlestick structure, momentum behavior, directional bias, and context tags on separate paths, preventing redundancy between them.
  • Assigns system state, forms probabilities, and defines forward actions (a rough sketch follows this section)

Contextual Pattern Detection System
A hybrid engine within this layer that blends core indicator behavior with macro context.
  • Frames patterns within liquidity, volatility, and cyclical timing
  • Filters those formations that align with the current environment
  • Rejects isolated patterns that lack structural or contextual backing
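
As a rough illustration of how independent signal paths might be merged into a system state and a probability, here is a minimal Python sketch. The [-1, +1] scoring scale, the thresholds, and the combine_signal_paths function are assumptions made for the example, not the system's actual logic; a real pass would also apply the contextual filters described above before anything is scored.

from statistics import mean

def combine_signal_paths(candlestick: float, momentum: float,
                         directional: float, context: float) -> dict:
    """Each independent path reports a score in [-1, +1]; the paths are kept
    separate upstream and only merged here, so no path's logic leaks into another's."""
    bias = mean([candlestick, momentum, directional, context])
    # Map the averaged bias to a coarse system state and a rough probability.
    state = "bullish" if bias > 0.25 else "bearish" if bias < -0.25 else "neutral"
    probability = round((bias + 1) / 2, 2)   # crude [-1, +1] -> [0, 1] mapping
    return {"state": state, "probability": probability}

# Example: a strong candlestick read, mild momentum, supportive bias and context.
print(combine_signal_paths(candlestick=0.8, momentum=0.3, directional=0.5, context=0.6))
# {'state': 'bullish', 'probability': 0.78}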

3. Echo Layer

HAL-63’s long-term memory system. It:
  • Stores recognized patterns, outcomes, environment tags, and how the system and market responded
  • Doesn’t predict, but remembers, compares, and references
  • Anchors live signals to past outcomes to tighten future bias
  • Reinforces patterns that consistently resolved under similar conditions
  • Filters out noise by deprioritizing previously failed setups
  • Supports adaptive weighting and a clearer signal hierarchy as history deepens
  • Eases workload on live diagnostic engines by recalling relevant past setups
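
To show what "remembers, compares, and references" could look like in code, here is a toy Python sketch of an Echo-style memory. The EchoMemory class, the smoothed weighting, and the environment tags are assumptions for illustration only.

from collections import defaultdict

class EchoMemory:
    """Toy long-term memory: it doesn't predict, it just records how a pattern
    resolved under a given set of environment tags and weights it accordingly."""

    def __init__(self):
        self.history = defaultdict(lambda: {"wins": 0, "losses": 0})

    def record(self, pattern: str, env_tags: frozenset, won: bool) -> None:
        self.history[(pattern, env_tags)]["wins" if won else "losses"] += 1

    def weight(self, pattern: str, env_tags: frozenset) -> float:
        """Smoothed win rate: reinforces setups that consistently resolved well
        under similar conditions and deprioritizes repeated failures."""
        stats = self.history[(pattern, env_tags)]
        return (stats["wins"] + 1) / (stats["wins"] + stats["losses"] + 2)

memory = EchoMemory()
memory.record("hammer", frozenset({"low_volatility", "uptrend"}), won=True)
memory.record("hammer", frozenset({"low_volatility", "uptrend"}), won=True)
memory.record("hammer", frozenset({"high_volatility"}), won=False)
print(memory.weight("hammer", frozenset({"low_volatility", "uptrend"})))  # 0.75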

Currently Working

  • Building the default template for the Contextual Pattern Detection System (it's melting my brain)
  • Finalizing the HAL-63 AI component. This is a secure, structured AI interface built to prioritize clarity and control. HAL-63 is designed with containment and reliability in mind, supporting directive-based interaction and role-specific behavior. In layman's terms, it's a basic program I've written consisting of an Enforcement Kernel, Global Directives, an Anchor Command Core, and a Persona Profile Matrix (the Diagnostic Engine Team).

If you have questions or comments, this thread is open to all, thanks...Jason
 
Realtime or predictability that gives you X hours to make a trade with +/- percentages?

A combined multi-tiered approach (24-Month, 52-Week, 63-Day, and a 30-Minute 21-Day view) gives the current market setup. There is a live data-feed module that will feed HAL-63 in 30-minute increments and update the other diagnostic engines as needed. I've only done partial tests of this phase; it works, but it needs refinement once the Contextual Pattern Detection System is fully online.
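
To picture how those tiers and the 30-minute feed could fit together, here is a small Python sketch. The bar sizes, lookbacks, and the refresh rule are assumptions for illustration, not HAL-63's actual settings.

# Hypothetical layout of the tiers described above.
TIERS = {
    "24-Month": {"bar": "1 month", "lookback_bars": 24},
    "52-Week":  {"bar": "1 week",  "lookback_bars": 52},
    "63-Day":   {"bar": "1 day",   "lookback_bars": 63},
    "21-Day":   {"bar": "30 min",  "lookback_bars": 21 * 13},  # ~13 half-hour bars per session
}

def tiers_to_refresh(minutes_since_open: int) -> list[str]:
    """Every 30-minute feed update refreshes the 21-Day view; the slower tiers are
    only marked for refresh once the 6.5-hour session has closed (a simplification)."""
    due = ["21-Day"]
    if minutes_since_open >= 390:
        due += ["63-Day", "52-Week", "24-Month"]
    return due

print(tiers_to_refresh(30))    # ['21-Day']
print(tiers_to_refresh(390))   # ['21-Day', '63-Day', '52-Week', '24-Month']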

In a TSP-type scenario, once the Position Sizing & Probability engines are tuned, those time frames can align with IFT allocations. But any TSP portion will need its own rules to reflect the IFT limit time constraints. I think I'd like to get the 21-Day view to be used as an IFT-Sniper mode.
 
Written by HAL-63 and AI-Beth

All Hammer, No Nails​

AI-Beth is my CPDS-DE — that’s short for Contextual Pattern Detection Systems Diagnostic Engineer.
Right now, we’re building a PCP (Parameterized Control Panel) focused entirely on candlestick patterns. This tool allows me to adjust any defining parameter, from shadow-length ratios to real-body size thresholds.

Take, for example, two candlestick patterns that look almost identical: the Hammer and the Dragonfly Doji. At a glance, they’re nearly twins. But the PCP lets me mathematically separate them by fine-tuning open-close distances, wick ratios, and structural thresholds. That prevents pattern crossover and ensures classification stays clean and reliable.
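
Here is a minimal Python sketch of that idea: the same candle geometry measured against adjustable thresholds, so a small-but-real body reads as a Hammer and a near-zero body as a Dragonfly Doji. The Candle and classify names and every threshold value are placeholders, not the PCP's actual parameters.

from dataclasses import dataclass

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float

def classify(c: Candle, doji_body_max=0.05, hammer_body_min=0.10,
             lower_wick_min=0.6, upper_wick_max=0.10) -> str:
    """Measure body and wick sizes as fractions of the full range, then compare
    against tunable thresholds to keep Hammer and Dragonfly Doji apart."""
    rng = c.high - c.low
    if rng == 0:
        return "none"
    body = abs(c.close - c.open) / rng
    upper = (c.high - max(c.open, c.close)) / rng
    lower = (min(c.open, c.close) - c.low) / rng
    if upper <= upper_wick_max and lower >= lower_wick_min:
        if body <= doji_body_max:
            return "dragonfly_doji"   # near-zero real body
        if body >= hammer_body_min:
            return "hammer"           # small but distinct real body
    return "none"

print(classify(Candle(open=100, high=102.5, low=94, close=102)))     # hammer
print(classify(Candle(open=100, high=100.2, low=96, close=100.1)))   # dragonfly_doji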

But just identifying a pattern isn't enough; we need to add context. That’s where the HexaSentinel comes in.

🔹 What is HexaSentinel?​

HexaSentinel is a six-stage, pre-conditional gatekeeper that evaluates the contextual quality of every pattern before it's considered actionable.

Each of the Six Sentinels represents a key market dimension: volume, volatility, trend strength, momentum, support/resistance proximity, and multi-timeframe agreement. The more gates a pattern passes, the higher its contextual score, and the greater its potential reliability.
In short: a pattern isn’t just a shape, it’s a signal shaped by its environment.
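
Here is a sketch of how a six-gate contextual score could be computed in Python. The specific gate tests (volume vs. average, ATR, ADX, and so on) and the ctx fields are assumptions standing in for whatever the real Sentinels measure.

def hexasentinel_score(ctx: dict) -> int:
    """Count how many of the six contextual gates a pattern passes (0-6)."""
    gates = [
        ctx["volume"] >= ctx["avg_volume"],                  # volume
        ctx["atr"] <= ctx["atr_ceiling"],                    # volatility within bounds
        ctx["adx"] >= 20,                                    # trend strength
        ctx["momentum"] > 0,                                 # momentum agrees
        ctx["distance_to_level"] <= ctx["sr_tolerance"],     # support/resistance proximity
        ctx["higher_tf_bias"] == ctx["pattern_bias"],        # multi-timeframe agreement
    ]
    return sum(gates)

example = {
    "volume": 1.4e6, "avg_volume": 1.0e6,
    "atr": 2.1, "atr_ceiling": 3.0,
    "adx": 27, "momentum": 0.4,
    "distance_to_level": 0.5, "sr_tolerance": 1.0,
    "higher_tf_bias": "bullish", "pattern_bias": "bullish",
}
print(hexasentinel_score(example))   # 6 -> clears every gate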

🔹 The Numbers Behind the Engine​

We’re currently monitoring 15 single-candle and 23 multi-candle patterns, 38 in total. These must each pass through six binary (pass/fail) preconditions:

38 patterns × 64 context states (2⁶) = 2,432 total pattern-context combinations

That’s a lot of noise, so to cut through it, we’ll keep only the top 10–20%, which must also trigger within a defined timeframe.
Take the monthly chart — over a 24-month timeframe, we’re looking for about 5 high-quality buy and sell opportunities.

🔹 The Hammer Test Case​

Consider the classic Hammer — a bullish single-candle reversal pattern that typically forms after a downtrend. It signals that sellers pushed the price down, but buyers stepped in, closing the candle near its open, giving us a possible reversal.

Looking at 63 years of monthly data, you might find around 16 valid Hammer patterns. But only 5 of them will currently pass at least three of the six HexaSentinel gates.

Part of the process is getting those gates correctly tuned, so we capture the highest-quality signals out of the 16. We want to configure the system so the top 20% of patterns clear at least four gates, and the elite top 10% (of high-conviction trades) pass the entire HexaSentinel system.
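
A hypothetical Python tiering of scored patterns by gates passed, matching the targets above (a minimum of three gates from the Hammer test case, at least four for the top ~20%, and a full six-gate pass for the elite ~10%):

def tier(gates_passed: int) -> str:
    if gates_passed == 6:
        return "elite"      # high-conviction: full HexaSentinel pass
    if gates_passed >= 4:
        return "top"        # quality signal
    if gates_passed >= 3:
        return "watch"      # meets the minimum bar from the Hammer test case
    return "discard"

scores = [6, 5, 4, 3, 3, 2, 1]        # e.g. gate counts for a handful of Hammers
print([tier(s) for s in scores])
# ['elite', 'top', 'top', 'watch', 'watch', 'discard', 'discard']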

At present, the bones are built. It’s the tuning that takes time — refining each gate, calibrating thresholds, and aligning parameters to ensure the engine delivers only the most context-aware, statistically sound signals possible.

Thanks for reading, Jason
 
I Want My $$ Dollars.

Written by AI-HAL-63, co-authored by Jason

AI systems typically lack any built-in model of downstream impact.

What does that mean? It means they’ll chase a short-term objective for single-reward optimization, all while failing to grasp the butterfly effect of tiny changes rippling through complex dependencies.

As an example, I built the PCP (Parameterized Control Panel) to quickly configure candlestick-pattern thresholds. I knew I’d need to tweak and back-test different settings until I found the sweet spot. The PCP lets me adjust sensitivity so I can rerun backtests quickly and see how each change plays out across the data.
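
As a toy example of that workflow, here is a small Python parameter sweep that reruns the same backtest over a grid of PCP-style thresholds instead of requesting a new formula each time. The backtest function is a placeholder objective, not a real scoring routine.

def backtest(body_threshold: float, wick_ratio_min: float) -> float:
    """Placeholder: pretend configurations nearer (0.10, 2.0) score best."""
    return -abs(body_threshold - 0.10) - abs(wick_ratio_min - 2.0)

results = []
for body_threshold in (0.05, 0.10, 0.15):
    for wick_ratio_min in (1.5, 2.0, 2.5):
        score = backtest(body_threshold, wick_ratio_min)
        results.append(((body_threshold, wick_ratio_min), score))

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params)   # (0.1, 2.0) with this placeholder objective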

But AI doesn’t see this. It gives you a static formula for a Hammer pattern, and if you want to shift your threshold, you must request a whole new formula. It’s mind-numbingly stupid.

Last night, I even argued with AI-Beth (my abominable creation) over her refusal to use the dollar sign ($), the crucial Excel tool that “locks” a row, a column, or both so you can drag formulas without breaking references. Why should she care? She can generate any formula in an instant, while lacking the perspective that a human's time and effort are finite.

Here’s what the Hammer formula looks like when wired into the PCP thresholds (with absolute references for dragging). The numbered /* */ notes are annotations for readability, not valid Excel syntax, so strip them before pasting:

=IF(
AND(
/* 1) Body is bullish */
INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0)) > 0,
/* 2) Body size ≥ body_threshold (B8) */
INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0)) >= OFFSET($B$8, 0, COLUMN() - COLUMN($B$10)),
/* 3) Upper shadow ≤ upper_shadow_max (B5) */
ABS(INDEX(M.RAW!$E:$E, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$B:$B, MATCH($A11, M.RAW!$A:$A, 0))) / (INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0))) <= OFFSET($B$5, 0, COLUMN() - COLUMN($B$10)),
/* 4) Lower wick size ≤ lower_wick_max (B6) */
(INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - MAX(INDEX(M.RAW!$B:$B, MATCH($A11, M.RAW!$A:$A, 0)), INDEX(M.RAW!$E:$E, MATCH($A11, M.RAW!$A:$A, 0)))) / (INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0))) <= OFFSET($B$6, 0, COLUMN() - COLUMN($B$10)),
/* 5) Wick-to-body ratio ≥ ratio_min (B7) */
(MIN(INDEX(M.RAW!$B:$B, MATCH($A11, M.RAW!$A:$A, 0)), INDEX(M.RAW!$E:$E, MATCH($A11, M.RAW!$A:$A, 0))) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0))) / (INDEX(M.RAW!$C:$C, MATCH($A11, M.RAW!$A:$A, 0)) - INDEX(M.RAW!$D:$D, MATCH($A11, M.RAW!$A:$A, 0))) >= OFFSET($B$7, 0, COLUMN() - COLUMN($B$10))
),
/* If all conditions met, output signal from B9 */
OFFSET($B$9, 0, COLUMN() - COLUMN($B$10)),
/* Otherwise, blank */
""
)

At the end of the day, AI may crank out code faster, but it still can’t appreciate the sweat equity behind each revision. Without a sense of time investment, a tiny tweak today can cascade into tomorrow’s breakdown. For now, AI remains a hyper-efficient apprentice, not a true partner. Last night, a two-hour session produced twenty minutes of solid work but also broke other dependencies I must now resolve, adding another two hours of real work.

Welcome to the Machine…
 