Anytime, Anywhere, All-the-Time
JoBot™ is the leading technology for psychological services

Patents

The inventions described across these patents share a common purpose: they make artificial intelligence systems behave more like trusted professionals. Instead of producing unpredictable or opaque answers, the technology ensures that AI follows clear rules, adapts responsibly when circumstances change, and remains accountable to human standards. At the core is the idea that AI should not only learn from large amounts of data but also respect the principles, knowledge, and practices that guide real people in fields such as psychology, law, medicine, and finance. Together, these patents provide a blueprint for building AI that is safer, more transparent, and more reliable in high-stakes situations. They are important because they show how advanced AI can be shaped to support human decision-making without replacing professional judgment, giving society both the benefits of automation and the assurance of ethical, rule-based conduct.

Hierarchical Locality Control and Multi-Granularity Recruitment Learning for Large Language Models
Patent Pending. 2515046765 (J. Diederich, 17 October 2025)

Abstract

A hierarchical artificial intelligence system providing continuously adjustable internal representations spanning localist (interpretable) to distributed (generalizable) encodings. The system comprises a base language model and recruited specialist models, each with independent locality control. An information-theoretic block recruitment mechanism adaptively allocates semantic blocks based on minimum description length criteria, eliminating the need for complete domain knowledge at initialization. Hierarchical LLM recruitment extends capacity allocation to entire specialized models when block-level adjustments prove insufficient. Mathematical proofs establish explicit threshold conditions (λ_i(τ, δ) = (C′/τ)·e^(−δ/τ)) under which attention provably concentrates on semantically relevant blocks with exponential bounds (O(e^(−δ/τ))). Convergence guarantees ensure finite termination at both levels (k_max = O(H(domains)/log|V|) for LLMs, p_max = O(√(n·H(Y)/log|V|)) for blocks per LLM). The locality dial enables real-time interpretability adjustment without retraining, supporting regulated domains including healthcare, finance, legal, and autonomous systems requiring both transparency and capability.
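For readers who want to see the stated bounds in action, the snippet below is a minimal, non-authoritative sketch of the threshold λ_i(τ, δ) = (C′/τ)·e^(−δ/τ) and the two termination caps quoted in the abstract. The constant C′, the entropy values and the omitted big-O constant factors are not specified in the patent; the numbers used here are placeholders.

```python
import math

def locality_threshold(tau: float, delta: float, c_prime: float = 1.0) -> float:
    """Threshold lambda_i(tau, delta) = (C'/tau) * exp(-delta/tau) from the abstract.

    Above this penalty level, attention is claimed to concentrate on semantically
    relevant blocks with error O(exp(-delta/tau)). C' is a model-dependent
    constant; 1.0 here is purely illustrative.
    """
    return (c_prime / tau) * math.exp(-delta / tau)

def recruitment_caps(domain_entropy: float, vocab_size: int,
                     n: int, label_entropy: float) -> tuple[int, int]:
    """Illustrative finite-termination caps (big-O bounds, constants omitted):
    k_max = O(H(domains)/log|V|) recruited LLMs,
    p_max = O(sqrt(n * H(Y) / log|V|)) semantic blocks per LLM."""
    log_v = math.log(vocab_size)
    k_max = math.ceil(domain_entropy / log_v)
    p_max = math.ceil(math.sqrt(n * label_entropy / log_v))
    return k_max, p_max

if __name__ == "__main__":
    lam = locality_threshold(tau=0.5, delta=2.0)
    print(f"attention concentration threshold: {lam:.4f}")
    print("caps (k_max, p_max):", recruitment_caps(8.0, 50_000, 10_000, 3.0))
```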
Localised LLM with Dynamic Locality Dial
Patent Pending. 2514985284 (J. Diederich, 29 September 2025)

Abstract

A method and system for training neural networks with dynamically adjustable internal representations ranging from localist (interpretable, rule-based) to distributed (generalizable, efficient). The system maintains a versioned rule store, compiles rules into differentiable constraints, and injects/updates constraints during training without restart ("hot reloading"). A tunable locality parameter controls group sparsity penalties, attention temperature, and anchor margins, enabling practitioners to position the model anywhere on the localist-distributed continuum. Information-theoretic principles guide anchor design to control attention entropy and pointer fidelity with provable bounds. A verification loop continuously checks outputs, triggering automatic rule updates to maintain compliance. The invention enables interpretable AI in regulated domains while preserving generalization capability, with applications in healthcare, finance, legal technology, autonomous systems, and other high-stakes fields requiring both transparency and performance.
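As an illustration of how a single locality parameter could drive the three training controls named above (group sparsity penalty, attention temperature, anchor margin), the following sketch maps a dial setting in [0, 1] to concrete hyperparameters. The mapping functions and default ranges are assumptions for demonstration only, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class LocalityDialConfig:
    """Hyperparameters derived from a single locality setting in [0, 1].

    0.0 ~ fully distributed (no sparsity pressure, soft attention),
    1.0 ~ fully localist (strong group sparsity, near-hard attention).
    The specific mappings below are illustrative, not taken from the patent.
    """
    group_sparsity_weight: float
    attention_temperature: float
    anchor_margin: float

def locality_dial(locality: float,
                  max_sparsity: float = 0.1,
                  min_temperature: float = 0.1,
                  max_margin: float = 2.0) -> LocalityDialConfig:
    if not 0.0 <= locality <= 1.0:
        raise ValueError("locality must be in [0, 1]")
    return LocalityDialConfig(
        group_sparsity_weight=max_sparsity * locality,                    # stronger penalty when localist
        attention_temperature=1.0 - (1.0 - min_temperature) * locality,   # sharper attention when localist
        anchor_margin=max_margin * locality,                              # wider margin around rule anchors
    )

# Example: reposition a deployed model toward the interpretable end without retraining.
cfg = locality_dial(0.8)
print(cfg)
```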

Papers
Diederich, Joachim. “Localist LLMs — A Mathematical Framework for Dynamic Locality Control.” arXiv:2510.09338 (2025). https://arxiv.org/abs/2510.09338

Method and System for Dynamic Injection of Symbolic Rules into Neural Network Training to Produce Localized and Interpretable Representations
Patent Pending. 2514932879 (J. Diederich, 13 September 2025)

Abstract

The invention discloses a method and system for dynamically injecting symbolic rules into the training process of large-scale language models. A hot reloading mechanism enables rules to be added, updated, or withdrawn during training without restarting, acting as inductive anchors that produce compact, localized, and interpretable embeddings. The approach extends theoretical results from linear networks, where group sparsity guarantees localized solutions, to deep transformer architectures. Symbolic rules function as dynamic regularizers, enhancing alignment, compliance, and robustness. The invention further includes example embodiments in general-purpose and domain-specific AI systems, and a system architecture comprising a training engine, symbolic rule storage, and a hot reloading module.
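A minimal sketch of rule hot reloading during training is shown below, assuming PyTorch and a toy penalty rule. The RuleStore interface, the forbidden-token rule and the linear stand-in model are all hypothetical; they only illustrate how a versioned rule set could act as a differentiable regularizer that is updated mid-training without a restart.

```python
import torch

class RuleStore:
    """Minimal versioned rule store; each rule is a callable mapping model
    outputs to a differentiable penalty. Hypothetical interface for illustration."""
    def __init__(self):
        self.rules = {}      # name -> penalty function
        self.version = 0

    def upsert(self, name, penalty_fn):
        self.rules[name] = penalty_fn
        self.version += 1    # bump version so the trainer notices the change

    def withdraw(self, name):
        self.rules.pop(name, None)
        self.version += 1

    def total_penalty(self, logits):
        if not self.rules:
            return logits.new_zeros(())
        return sum(fn(logits) for fn in self.rules.values())

store = RuleStore()
# Toy "rule": discourage probability mass on a forbidden token id (e.g. 13).
store.upsert("no_forbidden_token",
             lambda logits: torch.softmax(logits, dim=-1)[..., 13].mean())

model = torch.nn.Linear(16, 100)   # stand-in for a language model head
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(3):
    if step == 2:                  # "hot reload": tighten the rule mid-training, no restart
        store.upsert("no_forbidden_token",
                     lambda logits: 10.0 * torch.softmax(logits, dim=-1)[..., 13].mean())
    logits = model(torch.randn(8, 16))
    task_loss = logits.pow(2).mean()                 # placeholder task loss
    loss = task_loss + store.total_penalty(logits)
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, store.version, float(loss))
```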
A Safe AI Alignment Architecture with Dynamic Rule Revision and Hot Reloading
Patent Pending. 2514909646 (J. Diederich, 06 September 2025)

Abstract

This invention discloses a comprehensive architecture for ensuring the safety and alignment of artificial intelligence (AI) systems through dynamic rule management, hot reloading mechanisms, and translation-based compliance optimization. The system combines static workflow verification with continuous behavioral alignment using multimodal observation and symbolic rule cores. A key innovation is the dynamic rule revision framework that enables real-time updates to normative constraints while maintaining system consistency and auditability. The architecture includes hot reloading of rules into both symbolic verifiers and language model prompts, with empirical evidence that rule format translation improves AI compliance through attention mechanism optimization. The system provides formal safety guarantees through conservative approximation strategies while enabling continuous adaptation to evolving professional, legal, and ethical standards. Applications include healthcare, legal practice, financial services, defense systems, and other high-stakes professional domains requiring assured AI behavior.
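The sketch below illustrates, under simplifying assumptions, the dual path described above: one versioned rule set hot-reloaded both into a language-model system prompt and into a symbolic verifier that conservatively flags non-compliant output. The Rule and AlignmentCore classes, the prompt wording and the banned-phrase check are illustrative placeholders, not the patented architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    rule_id: str
    text: str            # human-readable normative constraint
    priority: int = 0

@dataclass
class AlignmentCore:
    """Illustrative sketch: the same versioned rule set is pushed both into a
    symbolic verifier (here a simple phrase check) and into the LLM prompt."""
    rules: dict = field(default_factory=dict)
    version: int = 0

    def hot_reload(self, rule: Rule):
        self.rules[rule.rule_id] = rule
        self.version += 1

    def render_prompt(self) -> str:
        ordered = sorted(self.rules.values(), key=lambda r: -r.priority)
        lines = [f"{i + 1}. {r.text}" for i, r in enumerate(ordered)]
        return "You must comply with the following rules:\n" + "\n".join(lines)

    def verify(self, output: str) -> list[str]:
        # Conservative approximation: flag any output containing a banned phrase.
        banned = ["definitive diagnosis", "guaranteed return"]
        return [p for p in banned if p in output.lower()]

core = AlignmentCore()
core.hot_reload(Rule("r1", "Never present a definitive diagnosis; recommend a qualified clinician.", 10))
print(core.render_prompt())
print(core.verify("This is a definitive diagnosis of depression."))
```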

Papers
Diederich, Joachim. “Rule Encoding and Compliance in Large Language Models: An Information-Theoretic Analysis.” arXiv:2510.05106 (2025). https://arxiv.org/abs/2510.05106

Integrated Verification and Alignment Architecture for Safe Agentic Artificial Intelligence
Patent Pending. 2025903712 (J. Diederich, 17 August 2025)

Abstract

This invention discloses a system and method for ensuring both the safety and alignment of agentic artificial intelligence (AI) systems. The approach combines workflow generation and static verification with multimodal neuro-symbolic alignment and continuous rule revision. AI agents generate batched workflows expressed in abstract syntax trees (ASTs), which are statically verified against explicit safety constraints before execution. Simultaneously, AI behaviour is aligned with individual professionals or institutions using multimodal observation (text, video, brain-computer interface signals) and a symbolic rule core. Continuous Rule Revision ensures rules remain current, tested, and versioned, and are hot-reloaded into both the symbolic verifier and language model prompts. The architecture provides dual enforcement—guiding AI generation and blocking unsafe execution—and produces immutable traces for auditability. The system is broadly applicable across professional, civil, and defence domains, including an embodiment where International Humanitarian Law (IHL) and Rules of Engagement (ROE) are hard-coded as non-defeasible top-priority rules. The invention delivers prevention, alignment, adaptability, and accountability, offering a comprehensive solution to AI safety in high-stakes contexts.
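To make the static-verification step concrete, here is a small sketch that parses a candidate workflow into an abstract syntax tree and rejects it if it calls a forbidden operation, before anything is executed. The forbidden-call list and the workflow functions (summarise_case, transfer_funds, notify_clinician) are hypothetical examples, not part of the patent.

```python
import ast

FORBIDDEN_CALLS = {"delete_records", "transfer_funds"}   # illustrative safety constraints

def verify_workflow(source: str) -> list[str]:
    """Statically walk the workflow's AST and collect violations; nothing runs."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                violations.append(f"line {node.lineno}: forbidden call '{node.func.id}'")
    return violations

workflow = """
summary = summarise_case(case_id)
transfer_funds(account, 1000)
notify_clinician(summary)
"""

issues = verify_workflow(workflow)
if issues:
    print("Workflow rejected:", issues)     # dual enforcement: unsafe execution is blocked
else:
    print("Workflow approved for execution")
```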
Multi-Media Extraction from Large Language Models
Patent Pending. 2514781609 (J. Diederich, 31 July 2025)

Abstract

Disclosed are methods, systems and computer-readable media for converting features discovered by Sparse Autoencoders (SAEs) trained on activations emitted by a previously trained large language model (LLM) into symbolic rules and multi-media explanations. The pipeline includes: (i) obtaining token-conditioned activations from the frozen LLM; (ii) training one or more SAEs to yield sparse, approximately monosemantic features; (iii) applying rule-induction (e.g., FOLD-SE-M) to derive human-interpretable rule sets; and (iv) rendering the rules as text, diagrams, audio, video or interactive scenes tailored to user profiles, including non-technical audiences, children and people with an intellectual disability. Optional modules fuse the rule sets with model-generated reasoning traces to provide step-by-step, provenance-aware explanations. The approach improves accessibility and auditability of mechanistic interpretability outputs while enabling consistent, multi-modal communication of an LLM’s behaviour without modifying LLM parameters.
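A compact sketch of step (ii) of the pipeline, training a sparse autoencoder on frozen-LLM activations with an L1 sparsity penalty, is given below. The layer sizes, the synthetic activations and the short training loop are assumptions for illustration; rule induction and multi-media rendering (steps iii and iv) are only indicated in a closing comment.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE over frozen-LLM activations; the L1 penalty encourages
    sparse, approximately monosemantic features (step ii of the pipeline)."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))
        return self.decoder(feats), feats

d_model, d_features, l1 = 512, 4096, 1e-3
sae = SparseAutoencoder(d_model, d_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

acts = torch.randn(256, d_model)          # stand-in for token-conditioned LLM activations
for _ in range(5):
    recon, feats = sae(acts)
    loss = (recon - acts).pow(2).mean() + l1 * feats.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Downstream (not shown): binarise feature activations, pass them to a rule-induction
# algorithm such as FOLD-SE-M, then render the rules as text/diagrams/audio per user profile.
```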
Individual-Level AI Alignment via a Multimodal Neuro‑Symbolic Architecture with Continuous Rule Revision
Patent Pending. 2514816811 (J. Diederich, 11 August 2025)

Abstract

Conventional alignment methods rely on generalized human values, which are contested, dynamic, and often unsuitable for safety‑critical professional practice. This invention discloses a system and method for aligning artificial intelligence (AI) behaviour to a specific individual, particularly a working professional, by combining multimodal observation (verbal transcripts, video of professional activity, and brain‑computer interface (BCI) signals) with a neuro‑symbolic control architecture. A multimodal perception layer encodes text, video and BCI streams into a shared representation. A policy module proposes actions and explanations, which are strictly governed by a symbolic core comprising ontologies, deontic/temporal rules, and an explicit conflict‑resolution regime. The architecture includes a Continuous Rule Revision module that normalises, prioritises, tests and versions the rule set to preserve decidability, performance and legal compliance. Alignment is maintained through behaviour cloning and inverse reinforcement learning from observed practice, rule‑aware fine‑tuning of language models, and drift detection with abstention. The result is a faithful, auditable and continuously updated simulation of the professional that remains perfectly aligned with observed behaviour and applicable constraints.
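The following sketch illustrates drift detection with abstention in the simplest possible form: if the model's recent behaviour embeddings move too far from the professional's reference embeddings, the proposed action is withheld and escalated. The embedding distance measure, the threshold and the synthetic data are placeholder assumptions, not the patented mechanism.

```python
import numpy as np

def drift_score(reference: np.ndarray, recent: np.ndarray) -> float:
    """Illustrative drift measure: distance between mean embeddings of the
    professional's reference behaviour and the model's recent outputs."""
    return float(np.linalg.norm(reference.mean(axis=0) - recent.mean(axis=0)))

def act_or_abstain(proposed_action: str, reference, recent, threshold: float = 0.5):
    score = drift_score(reference, recent)
    if score > threshold:
        # Drift detected: abstain and escalate to the human professional.
        return {"action": None, "abstained": True, "drift": score}
    return {"action": proposed_action, "abstained": False, "drift": score}

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(100, 8))   # embeddings of observed practice
recent = rng.normal(0.8, 1.0, size=(20, 8))       # embeddings of current model behaviour
print(act_or_abstain("send_draft_reply", reference, recent))   # drifted -> abstains
```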
The Personal Alignment of AI Systems
Patent Pending. 2514733904 (J. Diederich, 17 July 2025)

Abstract

Conventional approaches to AI alignment focus on broadly defined human values, despite the absence of a universally accepted set of belief systems. These generalised values are often inconsistent, subject to change over time, and may reflect psychological disturbances or contravene legal norms in particular jurisdictions. This invention provides a system and method for aligning AI behaviour with the daily, observed verbal behaviour of individual users—particularly working professionals—while ensuring compliance with established legal and ethical standards. The system incorporates ontologies for fact-checking and inference, and codifies key conversational rules using AIML structures and propositional or predicate logic. These mechanisms serve to regulate and personalise the AI's outputs, thereby enabling alignment at the level of the individual rather than at the population level.
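As a toy illustration of codifying conversational rules in propositional form, the snippet below evaluates rule conditions over a small set of conversation facts and returns the required response behaviour. The facts, the single rule and the simplified handling are hypothetical; the patent's AIML and predicate-logic encodings are richer than this.

```python
# Toy propositional rules over conversation facts; AIML-style pattern matching
# is reduced here to boolean conditions purely for illustration.
facts = {"user_is_client": True, "topic_is_medication": True, "prescriber_present": False}

rules = [
    # (name, condition over facts, required response behaviour)
    ("defer_medication_advice",
     lambda f: f["topic_is_medication"] and not f["prescriber_present"],
     "Refer the client to their prescribing physician."),
]

def apply_rules(facts: dict) -> list[str]:
    return [behaviour for name, cond, behaviour in rules if cond(facts)]

print(apply_rules(facts))   # -> ['Refer the client to their prescribing physician.']
```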

For more information, email: jobot@jobot.ai