Back in June I embarked on a project that was aimed at answering a seemingly simple conundrum:
How do I know if it is worth holding onto an investment?
As someone who is not a technical analysis trader, the above question led me to the more generalized question: "Is it possible to analyse a project/holding based on a combination of DYOR and personal investment status?". Over the following weeks/months I decided to attempt to condense information from multiple sources in order to build a structured series of questions/answers that could be repeated for other projects. If you're interested in the thought process I followed for this initiative please head back over to the beginning to read:
- Hodl, Sell or Consolidate? | Part 1
- Hodl, Sell or Consolidate? | Part 2
- Hodl, Sell or Consolidate? | Part 3
- Hodl, Sell or Consolidate? | Part 4
Alternatively, if you're just keen on reviewing the cliff notes (questions and where to get answers) take a look at the summary article below:
During the development of this process I used a small holding of mine as a test-case, applying the information I found from various sources and scoring each element of the project on a scale of 1-5 (where 1 was the least favourable and 5 the most favourable). In this article I'm going to dig a little deeper into this scoring and set out a few example methods of reviewing/analysing the results in order to answer the initial question.
If you have some specialized training in this type of analysis (specifically with questionnaire-like results) and you spot concerns with the assumptions I've made, please do point these out. I'm aware that this process is quite subjective in places and the significance of the findings is going to need to come with a healthy pinch of salt.
So, as mentioned in the introduction, while developing this framework I used a small holding of mine (<$5, Electroneum:ETN) to test out the process. If you followed the project from the beginning you'll be aware there were 4 key areas of analysis, which in turn were broken down further into subsections. For each of these subsections a score was applied and the average score was taken for the main section.
Note that here the most simplistic approach toward analysis has been taken: an average of the scores for each section which in turn, when combined, gives the final score: 2.51 (Unfavourable).
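This average-of-averages calculation can be sketched in a few lines. A minimal sketch, assuming hypothetical section names and scores (the real values come from the questionnaire in Parts 1-4, not from here):

```python
# Hypothetical subsection scores (1-5) grouped by section.
# Section names and values are illustrative only.
section_scores = {
    "Fundamentals": [3, 2, 4],
    "Active Participation": [1, 2, 1, 2],
    "Market Position": [3, 5, 2],
    "Personal Investment": [3, 2],
}

# Average each section's subsections first...
section_averages = {
    name: sum(scores) / len(scores) for name, scores in section_scores.items()
}

# ...then average the section averages to get the single final score.
final_score = sum(section_averages.values()) / len(section_averages)
```

Note that averaging the section averages (rather than all 18 subsection scores directly) means sections with fewer subsections carry more weight per question, which is one reason a single number can mislead.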
While it would be easy to just stop here, say "Okay, we've got a score", dust our hands off and call it a day, I think that this doesn't quite cover the nuances of the scoring. Indeed, while reviewing the individual scores we can see there is a mixture, with quite a few sitting at the lower end of the range and one peaking at the high end. Remember that we're looking to use this as an indicator to help determine if there is a trend toward Hodling, Selling or Consolidating. So, instead of just scoring them alone, it was important to consider what the score 'means' for each category. With that in mind I also reviewed each category to determine if a high or low score would be more indicative of a specific outcome: Hodl, Sell or Consolidate.
As we saw in the previous section, the initial results and categorization (though appearing to give a general picture) didn't quite give us confidence in the results. While I had previously had in mind the derivation of a singular score or sliding scale, looking at the initial results this seemed inadequate. Therefore, in this section we'll explore the data a little more, presenting it in a few different ways and trying to draw some clearer conclusions.
Before re-analyzing the data, I'm sure there is one thing that jumps out to you from the previous table. There is absolutely no difference in the categorization of Sell or Consolidate, which kinda makes sense right? If we're Selling for Fiat or for another Crypto the end result is generally the same: we're removing X from the portfolio and replacing it with Y (where Y = BTC, ETH, $, etc.). As such the first refinement of this methodology is to break down the decision making as follows: Hodl OR Sell/Consolidate OR Unclear, where Unclear represents a borderline result for which there is no clear distinction between the other two (i.e. a result of 3 in the initial grading).
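Combining this three-way split with the earlier observation that some categories are 'inverted' (a low score may actually favour Hodl), the mapping from score to category can be sketched like so. This is a minimal illustration, with the direction flag as my own naming, not part of the original framework:

```python
def categorize(score, high_means_hodl=True):
    """Map a 1-5 score to Hodl / Sell-Consolidate / Unclear.

    Some categories are inverted: a low score leans toward Hodl
    rather than away from it, so `high_means_hodl` flips the
    interpretation. A score of 3 is always borderline (Unclear).
    """
    if score == 3:
        return "Unclear"
    leans_high = score > 3
    if leans_high == high_means_hodl:
        return "Hodl"
    return "Sell/Consolidate"
```

For example, a 5 in a normal category and a 1 in an inverted category both come out as Hodl, which is exactly the per-category 'meaning' review described above.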
The first thing we can do when reviewing data is visualize it. So I decided to count up all the results that fit into each category and present them by frequency and again as a percentage of all of the results.
As you can see here the data offers us a little more information: of the 18 subsections initially analysed, 7 lean toward Sell/Consolidate, 6 toward Hodl and 5 are Unclear. My concern with this initial approach is that we're again looking at a macro level and not considering some of the details. Things like how many of the Hodl results were a "Strong Hodl" (i.e. a score of 5 or 1, depending on the category's direction), and likewise for "Strong Sell/Consolidate" signals, seemed important to capture.
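The frequency and percentage tally is straightforward to reproduce. A short sketch using the counts quoted above (the per-subsection labels themselves are reconstructed for illustration):

```python
from collections import Counter

# Per-subsection categorizations from the worked example:
# 7 Sell/Consolidate, 6 Hodl, 5 Unclear (18 subsections total).
categories = (
    ["Sell/Consolidate"] * 7 + ["Hodl"] * 6 + ["Unclear"] * 5
)

counts = Counter(categories)
total = sum(counts.values())

# Express each category as a percentage of all results.
percentages = {cat: 100 * n / total for cat, n in counts.items()}
```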
Going back to the initial data again, we can break it down a little further. This gives us a better picture of the distribution of the results and accounts for the stronger signals.
As you can see from these results, there is definitely something going on with the stronger sell signals. Indeed, we have as many strong results for this category as we do for the whole Hodl category. However, we've now moved to a more graded approach that no longer maps neatly onto our three main categories: Hodl OR Sell/Consolidate OR Unclear.
Weighting the Data:
To address the capture of the stronger signals and incorporate them into the scoring, I decided weighting could be applied to the frequency count. That is to say, rather than counting each occurrence once, we can apply a count of two for the stronger signals. So we end up with something like this:
- Strong Hodl = 2
- Hodl = 1
- Unclear = 1
- Sell/Consolidate = 1
- Strong Sell/Consolidate = 2
This would mean that for a Strong Hodl signal we'd need at least two Unclear/Sell signals to counterbalance it when comparing results.
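The weighted count can be sketched as follows. This is a minimal illustration of the scheme above; the `weighted_tally` helper and its label strings are my own naming, not part of the original write-up:

```python
# Strong signals count double, per the weighting table above.
WEIGHTS = {
    "Strong Hodl": 2,
    "Hodl": 1,
    "Unclear": 1,
    "Sell/Consolidate": 1,
    "Strong Sell/Consolidate": 2,
}

def weighted_tally(labels):
    """Collapse the five graded labels into the three main buckets,
    applying the doubled weight to strong signals."""
    tally = {"Hodl": 0, "Unclear": 0, "Sell/Consolidate": 0}
    for label in labels:
        bucket = label.removeprefix("Strong ")
        tally[bucket] += WEIGHTS[label]
    return tally
```

With this in place, a single Strong Hodl contributes 2 to the Hodl bucket, so it takes two ordinary opposing results to cancel it out, exactly as described.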
The result of weighting the scores, as you can see, is to reduce the impact of lower/unclear scores on the categorization, allowing us to see a little more clearly the separation of the two main categories in the context of the stronger signals. I'll admit that I'm sure this methodology isn't perfect, but it does seem to help us consider the whole of the data, and it does so better than a simple average score.
Finally, in order to give a clearer visualization I've found the radar chart provides the best platform for conveying the summary information. As you can see from the image below the trend toward the Sell/Consolidate point is clear.
At a macro level I think this current process works, particularly if we're keen to plug in results and get out a simple answer. However, another item I am interested in is the by-section breakdown. While I'm not going to go into this too much here, this information could help to separate out 'outlier' sections. For example, if you look at my original results the 'Active Participation' section has the potential to skew the remainder of the results in one direction due to its higher concentration of lower scores. As a result I believe applying a similar approach to each subsection as to the overall process could achieve something along these lines:
I intend to dig a little further into this idea, as I also believe that when comparing one section against another we need to consider the total number of results and account for this as well. Definitely something to think on going forward.
At this stage you may be thinking, "Wow that was a lot of work for <$5 of holdings" (haha, you're not wrong!!). However, I think it is important to remember the goal here was to build a framework which, in a way, we've done with the list of questions for doing your own research (DYOR). However, ideally I'd also like this information to be useful for repeating this process and I also don't want to have to manually do this each time. This has led me to the next phase of this process 'Automation'.
Those of you that follow me are probably aware that I've got a website (GitHub Pages) where I post my blog articles. It is through this platform that I want to consolidate this information and share it so that anyone can use it. As such I've developed a rough plan for what the next stages look like:
If you're interested in looking at a very early version of my ideas for the site you'll find it here: https://mynima.github.io/hsc_rating/
Well, it feels good to complete the first leg of this process. I spent the whole day programming yesterday, got somewhere with the site, and can say I'm thoroughly enjoying learning a new language as a result. In general I hope that this process/scoring system seems relevant to folks reading this article. As I said in the beginning, I'm not a trader and am terrible with technical analysis, so this method (in my mind) allows an alternative approach to reviewing your holdings, as it is based largely on making the most of the information that is available to everyone. Plus it takes into account your own situation/opinions.
The next (and key) goal will be to make it reusable. No-one wants to have to sit down and calculate these scores each time an airdrop/project comes along that you want to review. In the meantime, if you have any ideas/thoughts/suggestions please feel free to share them.
Finally, please remember that if you're using this methodology, do not consider it financial advice. You are, and always will be, ultimately responsible for making decisions about your finances. As with all tools, this has the potential not to fit every situation; take that pinch of salt and make sure you've done your research before making decisions, as it will save you a headache in the long run.
Hope you all enjoyed the article and find it useful, stay safe out there, good luck y'all!
If you liked this and want to read more check out my back catalogue here: