Technical
Consistent Prediction Accuracy Across Time Intervals
The Yo-Yo model achieves sustained accuracy of 75% across intervals from 10 seconds to 15 minutes, a critical improvement over traditional forecasting systems that often lose reliability over extended periods. Whereas traditional forecasting models struggle with consistency over these longer horizons, Yo-Yo's model keeps accuracy fluctuations minimal even under volatile market conditions.
Proprietary Neural Architecture & Resilience
Yo-Yo’s proprietary neural network maintains robustness immediately after training, minimizing data overfitting and leakage. It also demonstrates generalizability, maintaining accuracy when exposed to new, previously unseen price movements. Yo-Yo’s prediction model implements a proprietary Decomposed Long Short-Term Memory (DLSTM) network, an enhanced variant of traditional LSTM architecture. LSTMs are specialized recurrent neural networks (RNNs) engineered to capture long-term dependencies in sequential data by addressing the vanishing gradient problem inherent in standard RNNs. Through carefully designed memory cells and gates, LSTMs selectively retain relevant information while discarding noise, making them particularly effective for time series forecasting tasks.
DLSTM enhances this foundation by incorporating time-series decomposition mechanisms, drawing inspiration from models like Autoformer. This augmentation captures both short- and long-term dependencies in highly volatile markets by processing a trend and residual decomposition of the data in parallel LSTM modules. The architecture proves especially suitable for digital asset markets, efficiently extracting meaningful patterns from high-dimensional, noisy data.
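To make the decomposition concrete, the sketch below splits a price series into a moving-average trend and a residual component. The moving-average approach and the kernel size are illustrative assumptions in the spirit of Autoformer-style decomposition; Yo-Yo's actual decomposition layer is proprietary.

```python
import torch
import torch.nn.functional as F

def decompose(series: torch.Tensor, kernel_size: int = 25):
    """Split a (batch, length, features) series into trend and residual.

    Moving-average decomposition is an illustrative assumption here
    (Autoformer-style); Yo-Yo's actual decomposition layer is proprietary.
    """
    # Pad both ends so the smoothed trend keeps the original length.
    pad = (kernel_size - 1) // 2
    x = series.permute(0, 2, 1)                       # (batch, features, length)
    x = F.pad(x, (pad, kernel_size - 1 - pad), mode="replicate")
    trend = F.avg_pool1d(x, kernel_size, stride=1)    # moving-average trend
    trend = trend.permute(0, 2, 1)                    # back to (batch, length, features)
    residual = series - trend                         # short-term fluctuations
    return trend, residual
```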
Proprietary Model Enhancements
Decomposition Layer: The decomposition layer divides data into trend and residual components, improving the model’s adaptability to varying market conditions. This decomposition enhances signal accuracy, helping the model distinguish between long-term trends and short-term market noise.
Parallel LSTM Modules: Separate LSTM modules prioritize specific time scales, improving predictive consistency (see the sketch after this list).
Optimization: To optimize the signal-to-noise ratio, the Yo-Yo team fine-tuned the model to minimize overreactions to noise and improve generalization in high-frequency data environments.
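As a rough illustration of the parallel-branch idea, the sketch below routes the trend and residual components through separate LSTMs and fuses their final hidden states into three-class logits. Layer sizes, the fusion strategy, and the classification head are placeholder assumptions, not Yo-Yo's proprietary configuration.

```python
import torch
import torch.nn as nn

class DLSTMSketch(nn.Module):
    """Illustrative sketch of a decomposed-LSTM classifier; hyperparameters
    and structure are placeholders, not Yo-Yo's proprietary implementation."""

    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        # Parallel branches: one LSTM per decomposed component.
        self.trend_lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.residual_lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Fuse the two branch summaries into up / down / stationary logits.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, trend: torch.Tensor, residual: torch.Tensor):
        # Each branch reads one component of the decomposed series.
        _, (h_trend, _) = self.trend_lstm(trend)
        _, (h_res, _) = self.residual_lstm(residual)
        fused = torch.cat([h_trend[-1], h_res[-1]], dim=-1)
        return self.head(fused)  # raw logits; softmax applied downstream
```

Keeping the branches separate lets each LSTM specialize: the trend branch tracks slow-moving structure while the residual branch reacts to short-term fluctuations. Used together with the decomposition sketch above, a forward pass would look like `trend, residual = decompose(window)` followed by `logits = model(trend, residual)`.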
Model Output
Yo-Yo’s prediction model outputs a probability distribution across three potential classes: upward, downward, or stationary, offering a nuanced assessment of market direction rather than a simple categorical prediction. It presents this probabilistic breakdown alongside a confidence measure, giving users insight into the likelihood of each class.
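As a minimal sketch of how such an output could be consumed, the snippet below applies a softmax to raw logits and labels the resulting probabilities. The class names and return format are illustrative, not Yo-Yo's actual API.

```python
import torch

CLASSES = ("upward", "downward", "stationary")

def class_probabilities(logits: torch.Tensor) -> dict:
    """Map raw three-class logits (shape (3,)) to a named probability
    breakdown. A generic softmax sketch; names are illustrative."""
    probs = torch.softmax(logits, dim=-1)
    return {name: float(p) for name, p in zip(CLASSES, probs)}
```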
Entropy-Based Confidence Measure
Entropy is used to assess the prediction’s confidence level. Lower entropy signals a higher confidence level, while higher entropy suggests a less certain forecast, giving users an objective measure of forecast reliability. By delivering a predicted price movement and its associated confidence level, Yo-Yo’s model supports more informed trading decisions, integrating quantitative insights on both expected movement and prediction reliability.
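For reference, the Shannon entropy of the three class probabilities is H = -Σ p_i log p_i, which is maximal (log 3) for a uniform distribution and zero when one class receives all the probability mass. The sketch below turns this into a confidence score in [0, 1]; the normalization by log 3 is an illustrative assumption rather than Yo-Yo's exact formula.

```python
import math

def entropy_confidence(probs: dict) -> float:
    """Confidence = 1 - H / H_max, where H is Shannon entropy and
    H_max = log(number of classes). Illustrative normalization only."""
    h = -sum(p * math.log(p) for p in probs.values() if p > 0)
    h_max = math.log(len(probs))
    return 1.0 - h / h_max
```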