DDESONN
A Deep Dynamic Experimental Self-Organizing Neural Network Framework
Provides a fully native R deep learning framework for constructing, training, evaluating, and inspecting Deep Dynamic Ensemble Self-Organizing Neural Networks at research scale. The core engine is an object-oriented, R6 class-based implementation with explicit control over layer layout, dimensional flow, forward propagation, backpropagation, and transparent optimizer state updates. The framework does not rely on external deep learning back ends, enabling direct inspection of model state, reproducible numerical behavior, and fine-grained architectural control without requiring compiled dependencies or graphics-processing-unit-specific runtimes.

Users can define dimension-agnostic single-layer or deep multi-layer networks without hard-coded architecture limits, with per-layer configuration vectors for activation functions, derivatives, dropout behavior, and initialization strategies automatically aligned to network depth through controlled replication or truncation. Reproducible workflows can be executed through high-level helpers for fit, run, and predict across binary classification, multi-class classification, and regression modes.

Training pipelines support optional self-organization, adaptive learning rate behavior, and structured ensemble orchestration in which candidate models are evaluated under user-specified performance metrics and selectively promoted or pruned to refine a primary ensemble, enabling controlled ensemble evolution over successive runs. Ensemble evaluation includes fused prediction strategies in which member outputs may be combined through weighted averaging, arithmetic averaging, or voting mechanisms to generate consolidated metrics for research-level comparison and reproducible per-seed assessment.

The framework supports multiple optimization approaches, including stochastic gradient descent, adaptive moment estimation, and lookahead methods, alongside configurable regularization controls such as L1, L2, and mixed penalties with separate weight and bias update logic. Evaluation features provide threshold tuning, relevance scoring, receiver operating characteristic and precision-recall curve generation, area-under-curve computation, regression error diagnostics, and report-ready metric outputs. The package also includes artifact path management, debug state utilities, structured run-level metadata persistence capturing seeds, configuration states, thresholds, metrics, ensemble transitions, fused evaluation artifacts, and model identifiers, as well as reproducible scripts and vignettes documenting end-to-end experiments.

References:
Kingma and Ba (2015) <doi:10.48550/arXiv.1412.6980> "Adam: A Method for Stochastic Optimization".
Hinton et al. (2012) <https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf> "Neural Networks for Machine Learning (RMSprop lecture notes)".
Duchi et al. (2011) <https://jmlr.org/papers/v12/duchi11a.html> "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization".
Zeiler (2012) <doi:10.48550/arXiv.1212.5701> "ADADELTA: An Adaptive Learning Rate Method".
Zhang et al. (2019) <doi:10.48550/arXiv.1907.08610> "Lookahead Optimizer: k steps forward, 1 step back".
You et al. (2019) <doi:10.48550/arXiv.1904.00962> "Large Batch Optimization for Deep Learning: Training BERT in 76 minutes (LAMB)".
McMahan et al. (2013) <https://research.google.com/pubs/archive/41159.pdf> "Ad Click Prediction: a View from the Trenches (FTRL-Proximal)".
Klambauer et al. (2017) <https://proceedings.neurips.cc/paper/6698-self-normalizing-neural-networks.pdf> "Self-Normalizing Neural Networks (SELU)".
Maas et al. (2013) <https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf> "Rectifier Nonlinearities Improve Neural Network Acoustic Models (Leaky ReLU / rectifiers)".
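The depth-alignment behavior described above, in which per-layer configuration vectors are replicated or truncated to match network depth, can be illustrated with a small self-contained sketch in base R. The helper name `align_to_depth()` is hypothetical and exists only to show the idea; it is not part of the DDESONN API.

```r
# Hypothetical illustration of aligning a per-layer configuration vector to
# network depth via controlled replication or truncation. Not DDESONN code.
align_to_depth <- function(config, depth) {
  if (length(config) >= depth) {
    config[seq_len(depth)]            # truncate extra entries
  } else {
    rep(config, length.out = depth)   # replicate to reach the requested depth
  }
}

activations <- c("relu", "selu")
align_to_depth(activations, depth = 4)   # "relu" "selu" "relu" "selu"
align_to_depth(c(0.2, 0.5, 0.0), 2)      # 0.2 0.5 (e.g., dropout rates truncated)
```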
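Similarly, the fused prediction strategies mentioned above (weighted averaging, arithmetic averaging, and voting over member outputs) can be sketched independently of the package. The function `fuse_predictions()` below is a hypothetical illustration for binary-classification probabilities, not an exported DDESONN function.

```r
# Hypothetical sketch of fusing binary-classification probabilities from
# ensemble members via weighted average, arithmetic average, or majority vote.
fuse_predictions <- function(member_probs, weights = NULL,
                             method = c("weighted", "average", "vote"),
                             threshold = 0.5) {
  method <- match.arg(method)
  probs <- do.call(cbind, member_probs)          # one column per member
  if (method == "weighted") {
    if (is.null(weights)) weights <- rep(1, ncol(probs))
    weights <- weights / sum(weights)
    as.numeric(probs %*% weights)                # weighted mean per observation
  } else if (method == "average") {
    rowMeans(probs)                              # arithmetic mean per observation
  } else {
    votes <- rowSums(probs >= threshold)         # members voting "positive"
    as.numeric(votes > ncol(probs) / 2)          # majority vote -> 0/1
  }
}

members <- list(c(0.9, 0.2, 0.6), c(0.7, 0.4, 0.3), c(0.8, 0.1, 0.55))
fuse_predictions(members, method = "average")
fuse_predictions(members, weights = c(2, 1, 1), method = "weighted")
fuse_predictions(members, method = "vote")
```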
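Among the supported optimizers, adaptive moment estimation (Adam; Kingma and Ba 2015) has a compact update rule that can be written directly in R. The following is a generic reference sketch of a single Adam step for one weight matrix, not DDESONN's internal optimizer code.

```r
# Generic Adam update step (Kingma and Ba, 2015) for a single weight matrix.
# Reference sketch only; DDESONN's internal update logic may differ.
adam_step <- function(w, grad, state, lr = 1e-3,
                      beta1 = 0.9, beta2 = 0.999, eps = 1e-8) {
  state$t <- state$t + 1
  state$m <- beta1 * state$m + (1 - beta1) * grad       # first-moment estimate
  state$v <- beta2 * state$v + (1 - beta2) * grad^2     # second-moment estimate
  m_hat <- state$m / (1 - beta1^state$t)                # bias-corrected moments
  v_hat <- state$v / (1 - beta2^state$t)
  w <- w - lr * m_hat / (sqrt(v_hat) + eps)              # parameter update
  list(w = w, state = state)
}

w     <- matrix(0.1, nrow = 2, ncol = 2)
state <- list(m = matrix(0, 2, 2), v = matrix(0, 2, 2), t = 0)
grad  <- matrix(c(0.5, -0.3, 0.2, 0.1), nrow = 2)
res   <- adam_step(w, grad, state)
res$w
```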
README
# Techila Distributed Execution

> This directory contains optional external execution scripts.
> Techila integration is not required for normal package usage and is not a CRAN dependency.

This directory provides helper scripts for running DDESONN experiments in a distributed Techila environment. These scripts:

- Use the installed **DDESONN** package API (no `source()` and no direct `R/` file loading)
- Require a configured **Techila** backend
- Are **not required** for standard package usage

DDESONN behaves identically in local execution mode. Techila support is provided solely to reduce wall-clock time for large experiment runs.

---

## Why Parallel Processing Is Ideal Here

DDESONN workflows frequently involve **many independent runs** that are naturally parallelizable:

- Randomized seed sweeps (e.g., 100–1,000 seeds)
- Ensemble and temporary ensemble runs
- Longer training schedules (e.g., 360+ epochs)

Each run is typically **independent** (different seed, ensemble member, or iteration), meaning it can be dispatched to its own worker without affecting correctness. Sequential execution compounds wall-clock time quickly when individual training jobs take meaningful time.

As a development benchmark reference: a 1,000-seed Keras comparison sweep required approximately **2 days** end-to-end (roughly **1 hour per 100 seeds**), primarily due to sequential execution. The same scaling principle applies to DDESONN. Once epoch counts increase (e.g., beyond 10, and especially around 360), per-run wait time becomes significant, and parallel execution becomes the practical option for large sweeps (a minimal sketch of this pattern follows this README).

Techila is most beneficial when:

- Running **many seeds** (hundreds or thousands)
- Running **ensembles** (multiple candidate models)
- Using **higher epoch counts** (e.g., 360+)
- Needing throughput without modifying model logic

Parallelization does not change model behavior or results; it reduces wall-clock time by distributing independent runs across workers.

---

## Execution Runners

Two execution modes are provided for parity and validation:

- `single_runner_local_mvp.R`: local execution using the installed DDESONN package. Useful for baseline validation and reproducibility.
- `single_runner_techila_mvp.R`: distributed execution via Techila. Intended for large-scale or computationally intensive runs.

Both runners call the same DDESONN package API and are designed to produce comparable outputs.

---

## Requirements

To run the Techila scripts, you must have:

- The **DDESONN** package installed
- The **foreach** package installed
- The **techila** package installed
- A working Techila configuration on the submitting machine

Techila support is optional and must be installed separately if used.

---

## Alternative Parallel Infrastructure

While these scripts focus on Techila, the same distributed-run pattern can be implemented using other parallel compute environments. Examples include:

- Microsoft Azure virtual machines or batch compute services
- Amazon Web Services (EC2, Batch, or similar)
- Any multi-core or multi-node cluster environment

DDESONN's seed-based and ensemble-based workflows are naturally parallelizable because individual runs are independent. If you implement and validate an alternative parallel backend that preserves output parity with the local runner, contributions are welcome via pull request.
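As a concrete illustration of the independent-run pattern described in the README, below is a minimal sketch of a parallel seed sweep using `foreach` with a `doParallel` backend. Here `doParallel` stands in for the Techila backend, and `run_one_seed()` is a hypothetical placeholder for the actual calls into the installed DDESONN API made by the runner scripts.

```r
# Minimal sketch of a parallel seed sweep using foreach + doParallel.
library(foreach)
library(doParallel)

# Hypothetical per-seed job: replace the body with the actual DDESONN
# package calls (fit / run / predict) used by the runner scripts.
run_one_seed <- function(seed) {
  set.seed(seed)
  # ... train and evaluate one DDESONN model here ...
  list(seed = seed, metric = runif(1))  # placeholder result
}

cl <- makeCluster(max(1, parallel::detectCores() - 1))
registerDoParallel(cl)

seeds <- 1:100
results <- foreach(s = seeds, .combine = rbind) %dopar% {
  out <- run_one_seed(s)
  data.frame(seed = out$seed, metric = out$metric)
}

stopCluster(cl)
head(results)
```

The same loop body can be dispatched through any registered `foreach` backend; only the backend registration changes, which is what keeps local and distributed runs comparable.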
Versions across snapshots
| Version | Repository | File | Size |
|---|---|---|---|
| 7.1.11 | 2026-04-09 windows/windows R-4.5 | DDESONN_7.1.11.zip | 10.5 MiB |