#006 | Predictable vs Unpredictable Projects

Hi, I’m Ryan! You are receiving this because you have signed up to my weekly newsletter for Natural Language Processing (#NLP365), Entrepreneurship, and Life Design content!

Hey friends,

Since my PhD started back in October 2020, I have been experimenting with different methods for managing my energy and progress. Although I am still early in the process, I have developed some initial insights that I hope will be helpful to you 😊

Predictable vs Unpredictable

Time blocking is the simple act of grouping your time into blocks, where each block is dedicated to getting a specific task done. Time blocking is great if you have never tried it before, as it lets you get “in the zone” and avoid context switching so often throughout the day. However, it doesn’t always work; it depends on whether the task or project you are working on is predictable or unpredictable.

Predictable tasks are things like reading or working out. Unpredictable tasks are things like debugging code or finding research gaps for publications. Unpredictable tasks have two characteristics:

  1. Time and effort do not guarantee progress

  2. It takes a long time for you to get “in the zone” or “in the context”

Here’s what I mean. Imagine you are trying to debug 500 lines of code across 5 different files. You spend two 2-hour time blocks on it but make no progress. You move on to a different task to take a break from debugging. The next day, when you go back to debugging your code, how long do you think it will take you to get back into context? A freaking long time!

In an ideal world, you would not break away from debugging and would focus on it until the task is finished, but in the real world, you have external deadlines and meetings that force you to switch between tasks and projects.

Look at your to-do list now: how many tasks are predictable and how many are unpredictable? When most of your tasks are predictable, time management using time blocking works very well. When most of your tasks are unpredictable, time management is “useless”.

So how do we fight against unpredictable tasks? The answer is context management.

To keep this newsletter short, I will cover context management in next week’s newsletter :)

This week I finished reading:

  1. Limitless (8th Feb - 11th Feb 2021)

  2. Zero To One (13th Feb - 15th Feb 2021)

I have also completed my FIRST level 4 notes! 🔥

Total: 13 / 26 books | 1 / 26 level 4 notes | 0 / 12 actions

❓Question of the Week

How many tasks on your to-do list are predictable? How many are unpredictable? What methods are you using to deal with context switching?

And I don’t mean context switching in terms of distractions! I mean context switching where you are handling multiple projects throughout the day and each project requires you to have a different mindset / skillset.

Share with me your thoughts by replying to this email 👻 👻 👻

🐦 Tweet of the Week

💡 Quote of the Week

Often our greatest struggles lead to our greatest strengths — Limitless

🔥 Recommendation(s) of the Week

Create a personal CRM — I can’t think of anything tech / product to recommend this week, so I figured this section can also be about recommendations in general. This week, I am recommending that you build your own simple CRM to remind you to connect and stay in touch with your family and close friends! I have had one since April 2020 and it has worked out amazingly well, as it reminds me to reach out to people. It’s a high-ROI tool :)
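If you are wondering what a “personal CRM” can look like, here is a minimal sketch in Python using only the standard library. The file name, contact names, and cadences are hypothetical and just for illustration; this is only one way to build it, not a description of my own setup.

```python
# A minimal personal-CRM sketch (standard library only).
# contacts.json is a hypothetical file, e.g.
#   {"Mum": {"last_contact": "2021-02-01", "every_days": 7},
#    "Old friend": {"last_contact": "2021-01-10", "every_days": 30}}
import json
from datetime import date, timedelta

CONTACTS_FILE = "contacts.json"

def people_to_reach_out_to(today=None):
    """Return (name, days_since_contact) for everyone overdue for a catch-up."""
    today = today or date.today()
    with open(CONTACTS_FILE) as f:
        contacts = json.load(f)
    due = []
    for name, info in contacts.items():
        last = date.fromisoformat(info["last_contact"])
        if today - last >= timedelta(days=info["every_days"]):
            due.append((name, (today - last).days))
    # Most overdue first
    return sorted(due, key=lambda pair: -pair[1])

if __name__ == "__main__":
    for name, days in people_to_reach_out_to():
        print(f"Reach out to {name} - last contact {days} days ago")
```

Run it once a day (for example via a calendar reminder or cron) and it prints whoever you are overdue to catch up with.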

🔦 AI Research Paper(s) Spotlight of the Week

The first paper proposes KinGDOM (Knowledge Guided DOMain adaptation), a two-step domain-adversarial framework that uses a commonsense knowledge base for unsupervised domain adaptation:

  1. Train a shared graph autoencoder using a GCN on ConceptNet to a) learn inter-domain conceptual links and b) learn domain-invariant concept representations

  2. Extract document-specific sub-graph embeddings and feed them into a popular domain-adversarial neural network (DANN). A shared autoencoder is also trained on the extracted graph embeddings to further capture domain invariance (see the toy sketch after this list)
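This is not the authors’ code, but to make the adversarial step concrete, here is a toy PyTorch sketch of a DANN-style head with a gradient reversal layer. The input `doc_graph_emb` stands in for the document-specific sub-graph embedding from step 1 (the GCN autoencoder itself is not shown), and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass,
    flips (and scales) the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANNHead(nn.Module):
    """Toy DANN head: a task classifier and a domain discriminator share the
    same document/graph embedding; the discriminator only sees the embedding
    through the gradient reversal layer."""
    def __init__(self, emb_dim=128, n_classes=2, n_domains=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.task_clf = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
        self.domain_clf = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, n_domains))

    def forward(self, doc_graph_emb):
        task_logits = self.task_clf(doc_graph_emb)
        domain_logits = self.domain_clf(GradReverse.apply(doc_graph_emb, self.lambd))
        return task_logits, domain_logits
```

The task classifier learns the end task (e.g. sentiment), while the domain discriminator, receiving only sign-flipped gradients, pushes the shared embedding towards domain-invariant features.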

The second paper introduces BERTweet, a pre-trained language model for English Tweets. Its main contributions:

  1. The first to pre-train a language model (BERT) using English Tweets

  2. Outperformed previous SOTA models on three downstream tasks: POS tagging, NER, and text classification

  3. Released BERTweet, which can be used with fairseq and transformers

I am excited to test out BERTweet next week! 👻
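Since BERTweet is released for use with the transformers library, here is a minimal sketch of how I expect loading it to look. The Hub model ID vinai/bertweet-base and the example tweet are assumptions on my part, not details from the summary above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Model ID on the Hugging Face Hub (assumed, not stated in this newsletter)
MODEL_ID = "vinai/bertweet-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

tweet = "Debugging 500 lines of code across 5 files... send help"  # any English tweet
inputs = tokenizer(tweet, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per (sub)token from the last layer
print(outputs.last_hidden_state.shape)
```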

🎥 This Week on YouTube

That’s it for this week! I hope you find something useful from this newsletter. More to come next Sunday! Have a good week ahead! 🎮

More of me on YouTube, Twitter, LinkedIn, and Instagram.