#015 | The OneHundredBook Index
Hi, I’m Ryan! You are receiving this because you have signed up to my weekly newsletter for Natural Language Processing (#NLP365), Entrepreneurship, and Life Design content!
Hey friends,
This year I picked up the habit of reading. 15 weeks into 2021 and 32 books later, here’s what I’ve noticed so far about learning from books:
Exploration vs Exploitation — You might not remember most of what you have read, but that’s okay. Books expand your mind and your mental box. When it comes to idea generation and connecting the dots, reading more and exploring different concepts will help you generate better ideas and learn faster! This is hard to measure because the results are more subtle, but it’s nonetheless extremely useful! 🥊
Build a strong knowledge base (mentally or systematically) — When you come across new ideas / knowledge, there’s an understanding percentage that determines how much of the new knowledge you can internalise. The more a new idea resonates with you and the better you understand it, the more likely you are to remember and internalise it. To increase your understanding percentage, you need to establish a strong foundational knowledge base for new knowledge to fall into. One strategy is to deeply understand a few books / areas so that when you come across similar new ideas, you can relate them to your strong knowledge base.
Reading isn’t difficult, taking action is — Most people, including myself, struggle with reading books. Last year I read a total of 4 BOOKS! What changed this year was my underlying motivation to read and my daily routine. At the end of the day, taking action on what you have learned is the most important driver of results. Reading / learning is just the motion that gets you to action.
Most books are fluff with 1 - 3 key points — Nothing more needs to be said here. With practice, you will learn to cut through the fluff and extract the key information.
The OneHundredBook (OHB) Index
I would rather read the best 100 books over and over again until I absorb them rather than read all the books — Naval Ravikant
As I read through many books this year, I really resonate with and buy into Naval Ravikant’s approach to books: you’d rather read the best 100 books over and over again. Each time you reread the same book, your foundational knowledge base will be different, and therefore the lessons learned will also be different! Here’s a simple equation for this:
Lessons Learned = Lessons in the Book + Your Knowledge Base
With that said, I want to build my own library of 100 books! I call this the OneHundredBook (OHB) Index (I know I know… creative isn’t my strong suit).
First, I would need to read at least 100 books! Assuming only 10% of the books I read make it onto the OneHundredBook (OHB) Index, I would need to read 1000 books! 🎮
And to be honest, 10% is still pretty high, but I think it’s a good starting point. Once I reach 100 books, then just like a stock market index, I can start to replace books in my top 100 over time as I discover “higher quality” books and books that resonate most with me based on my experience and personal situation.
Once I build the page, I will share more with you guys! 👾
This week I finished reading:
Actionable Gamification (11th Apr - 14th Apr 2021)
The Psychology of Money (15th Apr - 16th Apr 2021)
Thinking in Bets (17th Apr - 18th Apr 2021)
Total: 32 / 26 books | 3 / 26 level 4 notes | 2 / 12 actions
❓Question of the Week
What’s your top 1 - 3 books to read?
Share your thoughts by replying to this email. I would love to hear from you! 👻 👻 👻
🐦 Tweet of the Week
The journey to a thousand books begins with a single page.
— Alex & Books 📚 (@AlexAndBooks_)
2:01 PM • Apr 17, 2021
💡 Quote of the Week
Knowing what the bottlenecks will be can help you start to think of ways of making your study time more efficient and effective, as well as avoid tools that probably won’t be too helpful to your goal — Ultralearning
🔥 Recommendation(s) of the Week
This month, Zeroton’s monthly habits challenge is to READ CONSISTENTLY. We want people to build the habit of reading! We are 2.5 weeks into April 2021, and here’s how our group is doing so far:
Zeroton’s mission is to encourage and enable people to take more action. As of 16th April, as a group, we have read a total of 6915 MINUTES!! Our target is to hit 10000 reading minutes by the end of the month, and we are on track!
If you are interested in joining this challenge with us (it’s never too late), please join Zeroton’s Slack Channel here.
🔦 AI Research - Nested NER
NER as a sequence labelling task only captures non-nested (flat) entities, as it assumes entity mentions don't overlap or nest within each other. Existing NER models still struggle with nested NER, mainly due to a lack of standardised datasets and proper methodologies. Nested NER is when a named entity contains another named entity. For example, "Bank of England" contains both the entity "Bank of England" and the entity "England".
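To see why flat sequence labelling can't represent nesting, here's a minimal sketch (my own illustration, not from any of the papers below): entities are stored as (start, end, label) spans, and flattening them into BIO tags forces each token to carry exactly one tag, so the inner entity overwrites part of the outer one.

```python
# Entity mentions as (start, end, label) spans over tokens; end is exclusive.
tokens = ["Bank", "of", "England", "announced", "new", "rates"]

# Gold annotations: the outer ORG mention contains an inner GPE mention.
entities = [
    (0, 3, "ORG"),  # "Bank of England"
    (2, 3, "GPE"),  # "England" (nested inside the ORG span)
]

def bio_tags(entities, n_tokens):
    """Flatten span annotations into BIO tags; only one tag fits per token."""
    tags = ["O"] * n_tokens
    for start, end, label in entities:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

# The nested span overwrites part of the outer one: the token "England" can
# only carry a single tag, so the nested structure is lost.
print(bio_tags(entities, len(tokens)))
# -> ['B-ORG', 'I-ORG', 'B-GPE', 'O', 'O', 'O']
```

This is exactly the limitation that the span-based and layered models below are designed to remove.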
Bipartite Flat-Graph Network for Nested Named Entity Recognition proposed a bipartite flat-graph network for nested NER with two submodules: a flat NER module to capture outermost entities and an entity graph module to capture inner entities. The new representations learned from the graph module are fed back to the flat NER module to improve the outermost entity predictions.
Named Entity Recognition as Dependency Parsing uses a biaffine model to give the NER model a global view of the input text so that it can explore all spans and predict BOTH flat and nested entities accurately. It reformulates NER from a sequence labelling task to one of identifying start and end indices and assigning a category to each entity span. The system achieved SOTA results on all three nested NER datasets and increased performance on five flat NER datasets by 2.2%.
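The span-based reformulation can be sketched in a few lines. This is a toy illustration under my own simplifying assumptions (random vectors standing in for encoder outputs, tiny dimensions), not the paper's implementation: every (start, end) span gets a biaffine score per label, and because every span is scored independently, nested spans come for free.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim, n_labels = 6, 8, 3  # toy sizes; label 0 = "not an entity"

# Stand-ins for what an encoder + two FFNNs would produce: separate
# start and end representations for each token.
h_start = rng.normal(size=(n_tokens, dim))
h_end = rng.normal(size=(n_tokens, dim))

# Biaffine tensor U: one (dim x dim) interaction matrix per label.
U = rng.normal(size=(n_labels, dim, dim))

def score_spans(h_start, h_end, U):
    """scores[l, i, j] = h_start[i] @ U[l] @ h_end[j] for every span (i, j)."""
    return np.einsum("id,ldk,jk->lij", h_start, U, h_end)

scores = score_spans(h_start, h_end, U)

# Keep only valid spans (start <= end) and take the argmax label per span.
# A span with label > 0 would be predicted as an entity; "Bank of England"
# and the nested "England" are simply two different (i, j) pairs.
predicted = [
    (i, j, int(scores[:, i, j].argmax()))
    for i in range(n_tokens)
    for j in range(i, n_tokens)
]
```

Scoring all spans is quadratic in sentence length, which is the price paid for the global view over both flat and nested mentions.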
An alternative to the biaffine model is a layered model. Pyramid: A Layered Model for Nested Named Entity Recognition proposed the Pyramid model, which consists of a stack of inter-connected flat NER layers. Each flat NER layer predicts whether a text region is a complete entity mention or not. The layers are inter-connected: embeddings are passed from lower decoding layers to higher decoding layers, which means the accuracy and output of a higher decoding layer rely on the lower decoding layers. This layered design alleviates the layer disorientation and error propagation issues.
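A rough sketch of the pyramid idea, again under my own simplifying assumptions (mean-pooling stands in for the paper's inter-layer encoder): layer l holds representations of text regions of length l + 1, built by merging adjacent regions from the layer below, so each layer only has to decide whether its regions are complete entity mentions.

```python
import numpy as np

def build_pyramid(token_reprs, max_len):
    """Return one matrix per layer; row i of layer l represents tokens i..i+l."""
    layers = [token_reprs]
    for _ in range(1, max_len):
        prev = layers[-1]
        if len(prev) < 2:
            break
        # Merge adjacent region representations; mean-pooling is a toy
        # stand-in for the LSTM/CNN the actual model uses between layers.
        merged = (prev[:-1] + prev[1:]) / 2.0
        layers.append(merged)
    return layers

token_reprs = np.random.default_rng(1).normal(size=(6, 4))  # 6 tokens, dim 4
layers = build_pyramid(token_reprs, max_len=4)
# layers[0]: 6 unigram regions, layers[1]: 5 bigrams, ..., layers[3]: 3 4-grams.
# A flat NER classifier on top of each layer would then flag complete mentions,
# so nested entities of different lengths are predicted at different layers.
```

Because each layer's input flows up from the layer below, errors can propagate upward, which is the trade-off the Pyramid paper's design tries to mitigate.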
🎥 This Week on YouTube
That’s it for this week! I hope you find something useful from this newsletter. More to come next Sunday! Have a good week ahead! 🎮