  1. Prediction: Accuracy

Technical

1.1 Consistent Prediction Accuracy Across Time Intervals

The Yo-Yo model achieves sustained 75% accuracy across prediction intervals ranging from 10 seconds to 15 minutes. Traditional forecasting models struggle to stay reliable over extended horizons; Yo-Yo’s model, by contrast, holds its accuracy steady with minimal fluctuation even under volatile market conditions.

1.2 Proprietary Neural Architecture & Resilience

Yo-Yo’s proprietary neural network remains robust immediately after training, minimizing overfitting and data leakage. It also generalizes well, maintaining accuracy when exposed to new, previously unseen price movements.

Yo-Yo’s prediction model implements a proprietary Decomposed Long Short-Term Memory (DLSTM) network, an enhanced variant of traditional LSTM architecture. LSTMs are specialized recurrent neural networks (RNNs) engineered to capture long-term dependencies in sequential data by addressing the vanishing gradient problem inherent in standard RNNs. Through carefully designed memory cells and gates, LSTMs selectively retain relevant information while discarding noise, making them particularly effective for time series forecasting tasks.
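As a point of reference, the sketch below shows a generic one-step-ahead LSTM forecaster in PyTorch of the kind described above. It is purely illustrative and is not Yo-Yo’s model; the class name, layer sizes, and window length are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Generic one-step-ahead LSTM regressor (illustrative, not Yo-Yo's model)."""

    def __init__(self, n_features: int = 1, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        # Gated memory cells let gradients flow across long sequences,
        # mitigating the vanishing-gradient problem of plain RNNs.
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        # The final hidden state summarizes the whole input window.
        return self.head(out[:, -1, :])

model = LSTMForecaster()
window = torch.randn(32, 120, 1)  # 32 samples, each a 120-step price window
forecast = model(window)          # -> (32, 1), one step ahead
```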

DLSTM enhances this foundation by incorporating time-series decomposition mechanisms, drawing inspiration from models such as Autoformer. By processing the trend and residual components of the data in parallel LSTM modules, it captures both short- and long-term dependencies in highly volatile markets. The architecture is especially well suited to digital asset markets, where it efficiently extracts meaningful patterns from high-dimensional, noisy data.
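Yo-Yo’s DLSTM implementation is proprietary, so the following PyTorch sketch only illustrates the general pattern described above: an Autoformer-style moving-average decomposition feeding two parallel LSTM branches whose outputs are recombined into a single forecast. The class names, kernel size, and layer widths are assumptions, not Yo-Yo’s actual parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeriesDecomp(nn.Module):
    """Moving-average decomposition in the style of Autoformer:
    trend = moving average, residual = series - trend."""

    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, n_features)
        # Pad both ends so the moving average preserves sequence length.
        pad = (self.kernel_size - 1) // 2
        front = x[:, :1, :].repeat(1, pad, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size - 1 - pad, 1)
        padded = torch.cat([front, x, back], dim=1)
        trend = F.avg_pool1d(
            padded.transpose(1, 2), self.kernel_size, stride=1
        ).transpose(1, 2)
        return trend, x - trend

class DLSTM(nn.Module):
    """Decomposed LSTM: parallel LSTMs over trend and residual components,
    recombined into a single forecast (illustrative sketch only)."""

    def __init__(self, n_features: int = 1, hidden_size: int = 64):
        super().__init__()
        self.decomp = SeriesDecomp()
        self.trend_lstm = nn.LSTM(n_features, hidden_size, batch_first=True)     # long-term structure
        self.residual_lstm = nn.LSTM(n_features, hidden_size, batch_first=True)  # short-term fluctuations
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        trend, residual = self.decomp(x)
        t_out, _ = self.trend_lstm(trend)
        r_out, _ = self.residual_lstm(residual)
        combined = torch.cat([t_out[:, -1, :], r_out[:, -1, :]], dim=-1)
        return self.head(combined)
```

The design rationale is that each branch only ever sees one component of the series: the trend LSTM is never asked to model high-frequency noise, and the residual LSTM is never distracted by slow drift.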

1.3 Proprietary Model Enhancements

Decomposition Layer: The decomposition layer divides data into trend and residual components, improving the model’s adaptability to varying market conditions. This decomposition enhances signal accuracy, helping the model distinguish between long-term trends and short-term market noise.
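To make the split concrete, here is a toy NumPy illustration on synthetic data (not market prices; the moving-average window is arbitrary):

```python
import numpy as np

# Synthetic series: slow drift plus zero-mean noise.
rng = np.random.default_rng(0)
t = np.arange(200)
price = 100 + 0.05 * t + rng.normal(scale=0.5, size=t.size)

k = 25  # moving-average window, chosen arbitrarily for illustration
trend = np.convolve(
    np.pad(price, (k // 2, k - 1 - k // 2), mode="edge"),
    np.ones(k) / k,
    mode="valid",
)
residual = price - trend

# The trend component isolates the 0.05-per-step drift, while the residual
# carries the zero-mean noise treated as short-term structure.
```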

Parallel LSTM Modules: Separate LSTM modules each prioritize a specific time scale, one tracking the slow-moving trend and the other the fast-moving residual, improving predictive consistency.

Optimization: To improve the signal-to-noise ratio, the Yo-Yo team fine-tuned the model to damp overreactions to transient noise and to generalize better in high-frequency data environments.
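The exact fine-tuning procedure is not published. The snippet below sketches standard techniques for damping overreaction to noisy data, using a Huber loss, weight decay, and gradient clipping; every hyperparameter here is a placeholder rather than a Yo-Yo setting.

```python
import torch
import torch.nn as nn

model = DLSTM()  # the decomposed-LSTM sketch defined above
# Huber loss penalizes large errors linearly rather than quadratically,
# so single noisy ticks pull the weights less than they would under MSE.
criterion = nn.HuberLoss(delta=1.0)
# Weight decay regularizes toward smaller weights, aiding generalization.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Clip gradients so bursts of volatility cannot destabilize training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()
```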
