Connected / Disconnected

(The cartoon is unrelated.)

Last month, I attended a networking dinner to discuss how Generative AI is reshaping coding practices. Around 50 people were at the event, including tech leads and department heads from Google, Pfizer, UBS, and other companies across various industries.

While the host company was (of course) promoting their own AI solutions1, they also encouraged open dialogue through panel sessions and Q&As. One of the main topics of the night was vibe coding – using LLM prompts to generate and refine code – and what it could mean for the future of software engineering.

Some attendees were optimistic. A cloud engineer from Google cited Sundar Pichai’s remark that 30% of new code at his company is now written with AI assistance2. The host company chimed in that LLMs aren’t just generating code but also tying development work back to broader software-development lifecycles (hint: story points that write themselves)3. What’s more, they predicted that within the next five years, LLMs will be advanced enough to understand entire ecosystems of programs and generate code that can be integrated seamlessly across codebases. How amazing would that be!

But others voiced concerns. One software engineer reflected on a multi-decade career built on making mistakes and learning iteratively4. “How will vibe coding ensure that future tech leaders can understand and navigate the intricacies of programming, of the nuances between Java and Python, and everything else, if AI is going to figure it all out for us?”

The Google engineer responded, “Well, I think in the future, you won’t need to know about Java versus Python. Instead, you’ll need to be really good at prompting and interacting with LLMs5. That’s where coding and engineering are headed — and vibe coding is just scratching the surface.”

Personally, I am skeptical, maybe even cynical.

As technology grows more complex, AI may indeed be the most sensible way to keep everything organized. This is true not just of software engineering, but across many areas of life: financial markets, medical research, communication networks. Each of these industries has grown into multi-layered webs of models, methods, and frameworks – with new tools and approaches emerging so quickly that it seems impossible for any single individual or organization to keep up. Will AI be the best way to help us make sense of these expanding systems, or will it leave us more disconnected because we’re relying on AI to solve everything?

But actually, do we even need to make sense of everything if AI can do it for us?

Back to the dinner event. While vibe coding might sound like fun Gen Z slang, the approach carries real implications in a world where technology continues to grow faster than any one person can handle. And if this doesn’t work, AI-generated components may end up looking as unrelated to the engineer’s eye as the cartoon is to this article.

  1. At work, I receive invitations to ‘exclusive tech roundtables’ or ‘AI leadership summits’ from various companies. These events are part networking, part marketing for the host company’s products and services. Selfishly, I find them a great way to learn new ideas from others! ↩︎
  2. As Google’s CEO Sundar Pichai said during its Q1 2025 earnings call: “I think the last time I had said the number was like 25% of code that’s checked in involves people accepting AI suggested solutions. That number is well over 30% now.” Source: Seeking Alpha ↩︎
  3. I’ve really come to enjoy the genre of memes poking fun at the planning process of the software development lifecycle (SDLC) and its references to “agile”, “scrum”, “ceremonies”, and the classic, “How many story points is that?”. See Instagram example here. ↩︎
  4. A related term here is yak shaving, referring to the seemingly endless tasks one must complete before they can even start coding (e.g., ten minutes to install an IDE, ten days to figure out why it won’t run on your laptop). It is often considered a rite of passage for people entering the developer or engineering world. ↩︎
  5. Research papers such as this one remind me of how deeply technology and human language are intertwined. It will be interesting to see how prompt engineering might shape everyday speech in the future. ↩︎


Go, Above and Beyond

Imagine you are at a dinner party with friends. Plates of roasted meats and greens are being passed around and glasses clinked for cheers. It’s an evening of hearty chatter and shared laughter.

What kinds of topics would you bring up at the dining table? Perhaps you’d enchant everyone with tales of your latest travels or start a deep conversation on the poetry of the cosmos. If you are comfortable with storytelling, there could be an endless list of memories and ideas to choose from.

Unfortunately for my dinner audience, however, I would want to talk about something that is perhaps at the bottom of that endless list: telling friends about an ancient board game in the hopes that they’d pick it up and play it with me.

That ancient board game is called Go, as it is known in Japan and in the Western world, or weiqi (围棋) in China and baduk (바둑) in Korea. But rather than bore everyone with explaining the rules of the game1, I’d instead tell you why I have enjoyed playing Go so much.

And so, this is really just a draft for me to practice for my future, not-hypothetical dinner party. Ahem, clink clink, please hear me out.

* * * * *

It Is Analytical: I started playing Go around the same time I started watching the Korean drama series Reply 1988. In the show, one of the main characters is a teenager who grew up as a Go prodigy. Throughout the series, he wins multiple international Go tournaments but also suffers from headaches, on account of having to use his genius brain so much.

But as much as I tried at the time, no amount of headaches got me to become a better Go player.

Kidding aside, part of why I enjoy Go is that it involves analytical thinking and visualising spatial relationships. Similar to chess, each move opens up further possibilities as you try to calculate potential outcomes. And since Go is scored on points, you can continually fold those possible outcomes into an estimate of whether you or your opponent will be ahead ten or twenty moves from now.

This is a mental exercise that can be both fun and challenging, and – I promise – mostly headache-less.

* * * * *

It Is Emotional: One of my first losses in Go came by half a point, after I made a blunder in the endgame. In that moment, I felt much like this 19th century Japanese woodblock print by Utagawa Kuniyoshi. I was surprised I could feel so upset over such an abstract game.

Thinking back on it, I know now why playing Go can be such an emotional experience; it’s because you’re forced to confront yourself throughout the entire game. To put it in concrete terms, when you place your pieces (“stones”) on the board, you cannot move them and generally do not remove them either.2 And so, as the stones fill the board over the course of a game, you begin to see your decisions take shape as realised successes from a larger strategy or – worse – as mistakes you hadn’t anticipated.

And unlike chess, where you generally play to checkmate, Go relies on scoring to determine the winner of the game.3 This means you can play the entire game thinking you are ahead, only to realise you had actually miscalculated the score. On the flip side, you might feel elated to discover you were ahead the entire game.

These are, of course, extreme examples. What I really want to say is simply that Go gets you in touch with your emotions just as much as it does your logical/analytical side. It can offer a lesson on hubris and self-doubt, on excitement and despair. Occasionally, you’ll also learn to suppress that urge to flip the board in frustration.

* * * * *

It Is Technology: It was a friend from university who got me to play Go. One day she told me she had watched the AlphaGo Documentary from DeepMind and suggested that I check it out.

At the time, I only vaguely knew about the board game and was more fascinated with the technologies discussed in the film. The lead engineers spoke of how creating software that can play professional-level Go represented a milestone for advancing artificial intelligence. To achieve this, they developed deep neural network systems enabling their program – named AlphaGo – to mimic the “intuition” a human player might display when playing Go.4

DeepMind’s focus on Go was part of a larger story about the evolution of machine learning techniques and their implications for society as a whole. For example, in recent years, the research lab’s success with its AlphaFold software made headlines for its ability to predict protein structures, which has helped researchers explore rare diseases and accelerate drug discovery.5

Imagine that, an ancient board game helping to advance artificial intelligence! This was the idea that led me to try out Go.

* * * * *

It Is Human: Still, it was another story in the DeepMind documentary that captivated me the most. It’s the story of Korean professional Go player Lee Sedol playing against AlphaGo. The match was a best-of-five series, with DeepMind looking to test its creation against one of the best players in the history of Go.

There are plenty of write-ups on the match itself, such as this piece from Wired. Go professionals from around the world remarked on AlphaGo’s seemingly invincible play and its ability to come up with moves that broke conventional wisdom in Go strategy.6 But just when utter defeat seemed inevitable, Lee Sedol mesmerized the Go community by playing a “God’s Move” that led to him winning Game 4 against AlphaGo.7 To date, that is the only game any human player has ever won against any version of AlphaGo.

In that single move, Lee Sedol showed that humans could go above simple, calculative analysis, move past the emotions of having lost three games in a row, face up against a seemingly unbeatable innovation, and muster the creativity to prevail. It was an inspiring moment for celebrating the imagination that humans possess to overcome adversity.

Imagine going home to say you were able to beat AlphaGo.

Now, wouldn’t that be an incredible story to tell at a dinner party?

Footnotes:

  1. For those who do want a quick explanation of Go: It is a two-player game played on a gridded board (generally 19×19) with black and white “stones”. The players take turns placing their stones on the intersections of the grid, the aim being to control more of the board (known as territory) than your opponent. This objective can be achieved through a combination of surrounding empty space and/or capturing your opponent’s stones. In fact, the Chinese name 围棋 literally translates to “to encircle” (围) and “chess or chess-like game” (棋).
  2. Stones do get removed from the board when they’ve been “captured”. See https://senseis.xmp.net/?RuleOfCapture for a visual description.
  3. To elaborate: A Go game ends when both players “pass” on their turn, after which they will count their territory to see who has more points. Conversely, similar to chess, a player can also resign (which counts as a loss).
  4. Go technically has more possible game combinations (at 2×10^170) than there are atoms in the observable universe (around 10^80), so it would be impossible for a computer program to run through every possible scenario to identify the optimal move. As such, DeepMind engineers sought to mimic how a human might intuitively start from a much smaller set of “best moves” to choose from. See DeepMind’s research papers for the maths applied in this approach: https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf and https://arxiv.org/pdf/1812.06855v1.pdf.
  5. While AlphaFold does not use the exact same ML techniques as AlphaGo, it still represents DeepMind’s progression in using artificial intelligence to tackle ever-more-complex problems. See their blog on AlphaFold’s journey: https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe.
  6. See AlphaGo’s “move 37” in Game 2: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_2.
  7. See Lee Sedol’s “move 78” in Game 4: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_4.
  8. About the artwork: The image at the top shows the first 21 moves of a friendly online tournament I played in (for fun) last year!
  9. About the title: I apologise for the cheesiness of the title 🙂

Learning One Language Through Another

Welcome to the first of a two-part series, ‘Creations from Twenty-Twenty’, which focuses on two mini-projects I worked on during last year’s lockdown. Hope you enjoy it!

* * * * *

This is a story about languages. But not quite.

Having moved to a foreign country at a young age, I never properly learned my mother tongue. While I grew up speaking Cantonese at home and occasionally practiced the Mandarin I had picked up from watching television shows, I never learned to read or write Chinese. And outside of one childhood summer when my mom attempted to teach me (lessons which I promptly forgot once school started), I never made another proper attempt.

Fast forward to last year. I purchased a textbook filled with Chinese characters1, and sat down with a notebook to learn and practice writing the characters one by one. Besides simply memorizing new characters each day, I also read about Chinese radicals and character classifications – for example, compound ideographs (會意) versus phono-semantic compounds (形聲).2 This helped me better understand how to recognize characters as well as learn the correct order of strokes for writing each one.

Soon, the pages of graph paper in my notebook began to fill up with repeated copying of characters from the textbook. However, this progress also came with two problems.

The first was testing myself. I was slowly increasing my mental repository of Chinese characters and their meanings, but how would I check that I had retained this knowledge over time? What’s to say I wouldn’t forget the first twenty characters once I moved on to practicing the next twenty? I also had to consider the different angles of recognition: could I successfully write the characters based on pinyin or English translations, and vice versa?3

The second problem was that, like all learning materials, there were limitations to the textbook I was using. In this case, while many of the characters included could be useful in everyday conversation, some seemed less relevant (Do I really need to be copying down 巫, for “shaman”?). The textbook also did not cover components such as prepositions (What is the Chinese character for “of”?), which I needed to learn in order to move into reading and writing Chinese in full sentences.

So how could I solve these two problems?

* * * * *

Enter: Python, one language to help me learn another language.

When it came to assessments, I had initially tried self-administering “tests” by writing down all the characters I knew on scrap paper and then comparing them to what I had repeatedly practiced in my notebook. But this quickly became more of a memory exercise than an actual evaluation of my familiarity with Chinese. It also felt wasteful to add to a stack of paper each time I made up a new quiz.

So, I decided to write a program that could test my knowledge through multiple methods. The program had a simple objective: create a randomized list of words by pinyin and/or English translations, which I would then use to practice recalling Chinese characters by writing them on my tablet.4

I named the program scramble_chinese. Sometimes it reminds me of breakfast.

But before it could output a list of words, scramble_chinese had to “know” the dataset of Chinese characters that I had already studied. To accomplish this, I typed the contents of my physical notebook into a Google Sheet, which my program could then access via Google APIs.5
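For those curious how that lookup might work, here is a minimal sketch using the gspread library – the sheet name, credentials file, and column headers are illustrative assumptions on my part, and the actual scramble_chinese may call the Sheets API differently.

```python
# A minimal sketch of pulling the study list from a Google Sheet.
# Assumes a service-account credentials file and a sheet whose header
# row has "character", "pinyin", and "english" columns -- the real
# program may use different names or the raw Sheets API instead.
import gspread

def load_characters(sheet_name="chinese_study_list"):
    gc = gspread.service_account(filename="credentials.json")
    worksheet = gc.open(sheet_name).sheet1
    # Each row becomes a dict keyed by the header row, e.g.
    # {"character": "星", "pinyin": "xīng", "english": "star"}
    return worksheet.get_all_records()
```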

Typing everything in was the most manual part of my testing process, and continues to be even today as I learn new characters. However, I ended up finding the Google Sheet quite useful for keeping track of what I had studied, especially as the list of characters grew. After all, it is quicker to search a virtual document than to sift through sheets of paper in a notebook.

Once I connected scramble_chinese to the Google Sheet, I added user options to make the program more customizable. One setting dictated how many characters I wanted to be tested on (ten characters for a “pop quiz” and fifty for an “exam”). Another switched the test output between English translation, pinyin, and Chinese characters for different types of assessments.
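Roughly speaking, those options could come together like the sketch below – a hedged illustration in which the quiz sizes and mode names are taken from the description above rather than from the program’s actual flags.

```python
import random

def make_quiz(entries, size=10, mode="english"):
    """Return a randomised prompt list for handwriting practice.

    entries: rows loaded from the Google Sheet (see the earlier sketch)
    size:    e.g. 10 for a "pop quiz", 50 for an "exam"
    mode:    which field to show as the prompt -- "english",
             "pinyin", or "character"
    """
    picks = random.sample(entries, k=min(size, len(entries)))
    return [row[mode] for row in picks]

# Example: a ten-question pop quiz prompting with English meanings.
# quiz = make_quiz(load_characters(), size=10, mode="english")
```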

Now that I had a more automated approach for testing myself, I moved on to solving the second question of expanding my Chinese beyond the textbook.

* * * * *

When I first started learning to write Chinese, I read somewhere that a person needs to know roughly three thousand characters to be able to read a Chinese newspaper.6 Since my textbook contains only several hundred, I thought perhaps news articles could be the next source for learning new characters.

I started by writing a program to read any given Chinese article from a website – let’s say, the Chinese version of Reuters. The program then identifies and displays the most repeated characters used in the article.

I named this program parse_article, for lack-of-creativity reasons.

The first part of coding parse_article was figuring out the HTML structure of different news posts, to make sure the program scanned only the actual article rather than the side headlines and advertisements that may show up on the webpage. And because this was focused on Chinese, the program splits the article’s text into a long list of individual characters.7

Next, I focused on removing alphanumerics from the long list so that the program displayed only Chinese characters. While this was simple for letters and numbers, I have to admit that in the end I included a line of code specifically removing a hardcoded list of different variations of commas, quotation marks, and periods.

Finally, I limited parse_article’s output to the top fifty most common characters in an article, figuring this would capture the ones I needed to learn to build up basic reading comprehension and writing abilities.
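Pulled together, the idea looks something like the sketch below. The placeholder URL, the assumption that article text lives in <p> tags, and the regex filter are simplifications of mine – the real parse_article handles each site’s layout more specifically.

```python
# Rough sketch of the parse_article idea: fetch a page, keep only the
# article body, drop everything that isn't a Chinese character, and
# count the most frequent ones.
import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

# Matches the CJK Unified Ideographs block (covers most common characters).
CJK = re.compile(r"[\u4e00-\u9fff]")

def top_characters(url, n=50):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumption: the article body lives in <p> tags; real pages need
    # site-specific selectors to skip sidebars, headlines, and ads.
    text = "".join(p.get_text() for p in soup.find_all("p"))
    # Keeping only Chinese characters also drops letters, digits,
    # and punctuation in a single pass.
    chars = [c for c in text if CJK.match(c)]
    return Counter(chars).most_common(n)

# Example (placeholder URL):
# for char, count in top_characters("https://example.com/chinese-article"):
#     print(char, count)
```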

When the program was completed, I held my breath and passed in news articles to see what the results would look like. As an example, a recent BBC post on Mars exploration (in Chinese) yielded the following top-ten Chinese characters and their frequencies in the article:

的 128 (Of)
们 59 (Suffix, plural marker)
生 58 (To be born; birth)
在 54 (At)
物 46 (Thing; object)
星 46 (Star)
一 35 (One)
有 34 (To have; to be)
我 33 (I; me; my)
可 30 (Can; able to)

Success! All of these characters are quite common in everyday speech. Some may not be as useful on their own (how often would I read about stars, 星?) but could be more relevant as parts of words (say, if I want to read about celebrities, 明星…for the gossip of course).8

I was happy to have completed the programs and could continue with my Chinese studies. For those who are interested, here are the GitHub links for scramble_chinese and parse_article. A quick note: I am as much an amateur in coding/Python as I am in Chinese, so any suggestions (on either language) would be much appreciated!

* * * * *

After the few days spent on writing these two short scripts, the natural question I had to ask myself was, “Does this help me learn Chinese more easily?”

The short answer: Yes. But not quite.

I joked with friends and colleagues that I probably learned more Python than Chinese from this mini-project. And in the months since, my self-planned Chinese lessons have faltered and slowed after progressing to several hundred characters. I put this down to negligence and distractions, both of which cannot be solved by my code.

Still, from time to time, I do pick up my notebook and practice writing new characters, most of which I pick up from parse_article and then test myself via scramble_chinese. I also think about how I could improve these programs. For instance, it could help to add some basic natural language processing (NLP) techniques to split a news article’s text and capture common multi-character phrases or words.9
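As a rough illustration of that idea, the jieba library mentioned in footnote 9 could replace the character-by-character split – a sketch of a possible improvement, not of what parse_article currently does.

```python
# Possible improvement: segment the text into words/phrases with jieba
# instead of single characters, then count the most common tokens that
# contain at least one Chinese character.
import re
from collections import Counter

import jieba

CJK = re.compile(r"[\u4e00-\u9fff]")

def top_words(text, n=50):
    tokens = [t for t in jieba.lcut(text) if CJK.search(t)]
    return Counter(tokens).most_common(n)

# top_words("我们在火星上寻找生物") might count tokens such as 我们, 火星, 生物
```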

Who knows? Perhaps by writing more Python, I might just end up learning some Chinese.

Footnotes:

  1. I got the Chineasy Everyday book, which is quite fun in that it is very visual, turning every character into a picture. However, I feel this gives the unintended impression that all characters are pictographs (which they are not; see below). I should also note this textbook does not follow a lesson-by-lesson structure and instead functions more like a reference book.
  2. There are several Chinese character classifications, with phono-semantic compounds being the most common. These characters typically consist of two or more building blocks, one to indicate the sound and another to indicate the meaning.
  3. Here, I am referring to Hanyu pinyin, which is the romanization system for Mandarin.
  4. I should note that I’m really fortunate to have a tablet and this is of course not needed! That said, I am also not sure why I didn’t just start with makeshift quizzes on my tablet in the first place…
  5. How lucky we are to be learning programming in an age of limitless resources! The Google Sheets API documentation was super easy to follow and use.
  6. Of course, reading a newspaper also requires contextual knowledge of a language and the culture involved.
  7. This means that, technically, parse_article could be used for Korean or Japanese news articles too!
  8. In Chinese, “celebrity” can be written as “明星”, which literally translates into “bright star”.
  9. In natural language processing, this technique is known as tokenization. I also found a library for segmenting Chinese, called jieba (which is pinyin for 结巴, meaning “to stutter”, haha).

Running into 2084


Here is a story for another time. It begins a few months earlier and ventures into the year 2084.

Filmmaker Casey Neistat once remarked that sleep can be replaced with exercise. Instead of sleeping 8 hours, he suggested waking up 2 hours earlier to go running. I prefer to phrase it differently: running can make you feel better when you are sleep-deprived. So in the weeks when I sleep 5 to 6 hours a night, going out for a run calms my nerves and clears my mind. Some days I like to go for harder runs, other days a gentler jog.

Lately, I’ve been enjoying podcasts during my longer runs, usually listening to shows such as Radiolab, 99% Invisible, Song Exploder, and Imaginary Worlds. These podcasts tell stories that range from the intellectual, to the creative, the innovative, and the emotional. It’s a great way to lose myself in another world when I’m running for several hours – and especially so when I am literally lost during a run.

But that is a story for another time.

One recent podcast episode talked about EVE Online, a massively multiplayer game with an economy that has been compared to that of small countries (both in scale and in theory). EVE Online is set in a virtual universe and has its own monetary system. The in-game currency, known as ISK (InterStellar Kredit), can be bought with real-world money based on a live exchange rate. Players spend ISK to boost their avatars’ lives, purchase virtual weapons, and even build space battleships said to span the size of entire real-world suburban neighborhoods. Just one of these virtual warships can cost several thousand real US dollars.

Because, just like in the real world, a staggering amount of money is spent on virtual wars.

And so the podcast told a story of how several fleets of virtual spaceships, worth hundreds of thousands of real American dollars in all, were destroyed on a single fateful day in January 2014, during a battle known as the Bloodbath of B-R5RB. The clash was part of a larger conflict between American video gamers and “The Russians”, for inter-galactic domination! (Cue the “muahaha” evil laugh).

In the end, the Americans won the larger war, in part because the opposition had imploded. “The Russians” were actually a collection of gamers from different Eastern European countries; their downfall coincided with the timeline of the Crimea crisis, during which Ukrainian gamers began sabotaging the cyber lands of their Russian (actual Russians) allies while, outside their computer screens, the Russian Federation was conducting a takeover of Ukraine’s Crimea.

Imagine that: the fate of a video game decided by real-life current events!

The podcast made me curious about what it means to be headed towards a reality that shares an expanding grey area with virtual worlds. Will we one day forget the line that separates the two? Worse yet, could we forget that technology is built from reality, and not the other way around?

So then I also imagined a world flipped, one where the computerized space is deciding the fate of reality itself. Perhaps the next major economic disaster will stem from provocations to digital currency systems. Or maybe today’s tech corporations will eventually replace the political structures of nation-states. Internet access might be added as a basic human right, placed right next to the fight against hunger and poverty. What if you could no longer buy groceries unless you had an Amazon account linked to your Bitcoin savings?

Introducing 2084: The futuristic version of Winston and his Big Brother!

Jump to a few weeks after that long run. I was on a plane flying from New York to London. I turned on my Amazon Kindle and tapped on “Ready Player One” by Ernest Cline, a novel I’d been meaning to read before Spielberg’s film adaptation gets released. I immediately became absorbed in the story, which is set in a dystopia where everyone finds their escape from the broken world via a virtual reality game known as OASIS. The OASIS universe is depicted as a far better place, with free education, glamorous dance clubs, unlimited resources and even giant mech robots!

But as fun as the book was to read, the story reminded me of how easily digital space can become the primary reality for people. The plot was centered on people living out entire lives via a virtual universe, and somehow this fantasy land didn’t seem so foreign. Does that mean we are on a trajectory towards such a reality? (Insert dramatic music here.)

In the end, both the podcast and the novel led me to wonder: just how aware are we of technology’s evolution? Social media and smartphone apps are no longer the novelty items we used to play with to take a breather from the real world. Instead, they are intertwined with everyday life. Myspace was great for personalizing a digital profile, but Facebook has flourished by personalizing the news content that we check first thing each morning. How dramatically technology has transformed in just one decade.

Finally, back to my run. It was a slightly longer session because I did actually get lost. After three hours of running on gravel roads and concrete bridges, I pulled out my phone to check the stats recorded by my running app. Just as I tapped the “Finished run” option, the application unexpectedly restarted itself and to my horror had erased all traces of the run! Gone were my mile splits, average pace, and estimated calories burned.

So, in a bout of tepid humor, I texted my friend, “If the app doesn’t show it, did I even run?”

But again, a story for another time.

Sources:

https://www.imaginaryworldspodcast.org/world-war-eve.html
http://reason.com/archives/2014/05/07/a-video-game-economy-the-size
https://www.wired.com/2014/02/eve-online-battle-of-b-r/
https://www.youtube.com/watch?v=DRe9DBosLD4
“Ready Player One” by Ernest Cline
And “Thank you!” to an individual who introduced me to some wonderful podcasts


My Own Imaginary Number

[Cartoon: “hexting”]

So this all started during elementary school.

When I was about 6 years old, I tried to invent a new ‘whole’ number – to squeeze in somewhere between, say, integers 2 and 3. I couldn’t conceptually think of one, and promptly gave up as I started learning about decimals and fractions.

Flash forward to near-present day at the office.

A coworker and I were discussing a series of numbers – 2, 8, 68, 128, 260, and so on – generated by a computer program. Each figure represented some collection of comments for financial analysis, but to our (untrained) human eyes, it might as well have come straight out of a science fiction novel.

Imagine if we were deciphering these numbers, etched on the interior walls of an ancient alien spacecraft1!

Except, we were studying them on our very-human computer screens.

What we learned was that the figures had been converted from hexadecimal values, which count in base-16 – a numerical system composed of 16 symbols: the digits 0 through 9 followed by the letters A through F2. By comparison, we normally count in base-10 (the decimal system), using only the digits 0 through 9 to construct numerical values.

In a way, this is like saying 20 regular dollars’ worth of groceries would cost 14 dollars in hex. Better yet: a low score of 10 points (out of 100) on your math exam would become an A3!
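If you’d like to check those two conversions yourself, a couple of lines of Python will do it – just the built-in hex() and int() functions, nothing specific to the program my coworker and I were looking at.

```python
# Sanity-checking the examples above with Python's built-ins.
print(hex(20))        # '0x14' -> 20 dollars in decimal is 14 in hex
print(hex(10))        # '0xa'  -> a score of 10 becomes a hexadecimal A
print(int("14", 16))  # 20     -> and back again: hex 14 is decimal 20
print(int("A", 16))   # 10
```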

The differences between hex and decimal counting point to different needs. Hex is generally used in computer science4, while the decimal system is…well, some speculate it’s because humans have 10 fingers5. Perhaps if we had evolved to have 4 fingers on each hand (like the Simpsons), we would be counting in base-8!

My 6-year-old self sure would have been proud if I were able to invent the number 8 from (7 + 1).

Truth be told, my (really unsophisticated) imagination for numbers still exists – for example, grasping what ‘infinity’ looks like. Think: outer space and the ever-expanding universe. Is there an “edge” to the seemingly endless empty black space?

On another note, I also have a difficult time coming up with new colors…


Additional thoughts:

  1. Anyone read Sphere by Michael Crichton?
  2. This means that the number 15 is represented by a single hexadecimal digit, F, while the number 16 requires a second hex digit, becoming hexadecimal 10. Similarly, in the decimal system, 10 is the first number that needs a second digit (going from 9 to 10).
  3. From my not-a-computer-science-expert notes: This is not to say that a hexadecimal world would have lower grading standards (ha ha). In reality, converting from base-10 to base-16 serves computing needs and computational memory purposes. For example, we can store data (numbers, texts, or both) in binary for computers to process. The same goes for hex.
  4. Also from my notes: The reason base-16 is useful for programming is that computers are founded on binary (two-state, 0 and 1) logic – a circuit operation starts as either True or False (supplemented by And, Or, Xor operators, etc.). As modern computing becomes more powerful and computers hold more memory, we move up in powers of 2: 16 doubles 8, which doubles 4, which doubles 2. On the other hand (no pun intended), base-10 is not a natural fit because 10 is not a power of 2 (it factors into 2 × 5).
  5. One of the forums pointed out that the ancient Babylonians employed base-60 numerals. This was based on counting the 3 knuckles on each of the 4 fingers of one hand (excluding the thumb, which does the pointing), and then tracking each full count of 12 with one of the 5 fingers on the other hand: 3 × 4 × 5 = 60. This is why we have 60 seconds in a minute and 60 minutes in an hour!
  6. On the cartoon: Hexadecimal outputs can be denoted by the prefix “0x”, similar to how binary values might start with “0b”. Also, the cell phone contains a message!

Sources:
http://www.businessinsider.com/octal-numbers-and-fingers-2014-6
http://math.stackexchange.com/questions/141184/does-the-word-integer-only-make-sense-in-base-10
http://math.stackexchange.com/questions/8734/why-have-we-chosen-our-number-system-to-be-decimal-base-10
https://www.youtube.com/watch?v=R9m2jck1f90
