Data Scientists are Predicting Cryptocurrency Prices With Deep Learning

A data specialist at India's prestigious Vellore Institute of Technology has developed a method for predicting real-time cryptocurrency prices using a Long Short-Term Memory (LSTM) neural network.

02 December, 2019 | AtoZ Markets – Researcher Abhinav Sagar demonstrated a four-step process for using machine learning to predict prices in a sector he describes as "relatively unpredictable" in comparison to traditional markets.

Predicting Cryptocurrency Prices

The popularity of cryptocurrencies soared in 2017 thanks to several consecutive months of exponential growth in market capitalization, which climbed to more than $800 billion in January 2018.

Mr. Sagar began his demonstration by noting that while machine learning has been used to predict stock prices with a multitude of time-series models, its application to cryptocurrency prices has been far more limited. The reason is that cryptocurrency prices depend on many factors: technological progress, internal competition, market pressure, economic problems, security issues, political factors, and so on.

Their high volatility creates great potential for high profit if smart investment strategies are adopted. Unfortunately, because they lack clear indicators, cryptocurrencies are relatively unpredictable compared to traditional financial instruments such as stocks.

According to Sagar, there are four steps to predict cryptocurrency prices:

  1. Get cryptocurrency data in real time.
  2. Prepare the data for training and testing.
  3. Predict the price of the cryptocurrency using the LSTM neural network.
  4. Visualize the prediction results.

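As a rough illustration of the first step, the snippet below sketches pulling daily price history from CryptoCompare's public `histoday` endpoint. The URL and response layout follow CryptoCompare's documented v2 API, but the helper names (`fetch_histoday`, `parse_histoday`) are illustrative and not taken from Sagar's code.

```python
import json
from urllib.request import urlopen

# CryptoCompare's public daily-history endpoint (v2 "histoday" API).
ENDPOINT = ("https://min-api.cryptocompare.com/data/v2/histoday"
            "?fsym=BTC&tsym=USD&limit=500")

def parse_histoday(payload):
    """Turn a histoday JSON payload into (time, open, high, low, close, volume) rows."""
    return [
        (r["time"], r["open"], r["high"], r["low"], r["close"], r["volumefrom"])
        for r in payload["Data"]["Data"]
    ]

def fetch_histoday(url=ENDPOINT):
    """Download and parse the daily history (requires network access)."""
    with urlopen(url) as resp:
        return parse_histoday(json.load(resp))
```

Each row carries the same features Sagar trains on: open, high, low, and close prices plus trading volume.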


Researcher Sagar's CryptoCompare Dataset

To train his network, Sagar used a CryptoCompare dataset with features such as price, volume, and open, high, and low values.

He said, "I have divided the data into two sets: a training set and a test set, containing 80% and 20% of the data, respectively. However, this decision is for this tutorial only. In real projects, you should always divide your data into training, validation, and test sets (e.g., 60%, 20%, 20%)."
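The 80/20 split Sagar describes, and the 60/20/20 split he recommends for real projects, amount to a simple chronological cut of the series. The sketch below is illustrative; the function name is not from his code.

```python
def split_series(data, train=0.6, val=0.2):
    """Chronologically split a time series into train/validation/test partitions.

    Ordering is preserved on purpose: shuffling a price series before
    splitting would leak future prices into the training set.
    """
    n = len(data)
    i = int(n * train)            # end of the training partition
    j = int(n * (train + val))    # end of the validation partition
    return data[:i], data[i:j], data[j:]
```

With `train=0.8, val=0.0` this reproduces the tutorial's 80/20 train/test split; the defaults give the 60/20/20 split recommended for real projects.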

He provides the full project code on GitHub and describes the functions he used to normalize the data values in preparation for machine learning.
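The exact normalization functions live in Sagar's repository; as one common choice for preparing price series for a neural network, here is a min-max scaler that maps values into the range [0, 1]. This is a sketch of the general technique, not his actual code.

```python
def min_max_scale(values):
    """Scale a list of numbers into [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat series
    return [(v - lo) / span for v in values]
```

In practice the scaling parameters should be computed on the training set only and then applied to the test set, for the same leakage reasons as the chronological split.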

Before plotting and visualizing the network's predictions, Sagar notes that he used the Mean Absolute Error (MAE) as an evaluation metric; it measures the average magnitude of errors in a set of predictions without considering their direction.
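Mean Absolute Error is simple to state: average the absolute differences between predicted and true prices. A minimal implementation:

```python
def mean_absolute_error(y_true, y_pred):
    """Average |true - predicted|; the sign (direction) of each error is ignored."""
    assert len(y_true) == len(y_pred) and y_true, "need equal-length, non-empty inputs"
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Because every error contributes its absolute value, an overshoot and an undershoot of the same size are penalized equally, which is why MAE says nothing about directional bias.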

Beyond market forecasting, the convergence of new decentralized technologies such as blockchain with machine learning has become increasingly popular.

Think we missed something? Let us know in the comments section below.
