Computer Science

One among the TwoHardThings in Computer Science

There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton

Reading this post shared on Hacker News today was thought-provoking. One among the TwoHardThings – naming things – might have implications for society. Is it right that I have a file named blacklist.yaml which stores all the words that are not allowed in my application?

Happy to see that the Internet Engineering Task Force has a draft working towards replacing such terms in its specification documents. The draft calls out commonly used terms like master-slave and whitelist-blacklist and suggests possible alternatives to make these specifications more neutral in terms of our society. While the draft progresses, I feel that developers can start making changes in some places immediately, starting with renaming a file like blacklist.yaml to blocked.yaml.

Post Script: My muscle memory still makes me type git push origin master. Who is the master here? Maybe the branch should be renamed to trunk, as in Subversion.
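For what it's worth, the rename itself is a one-liner. Here is a hypothetical demo in a throwaway repository (I use main below, but trunk works exactly the same way):

```shell
# Set up a disposable repository that starts on a master branch.
cd "$(mktemp -d)"
git init -q demo && cd demo
git symbolic-ref HEAD refs/heads/master
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > README && git add README && git commit -qm "init"

# Rename the local branch.
git branch -m master main
git symbolic-ref --short HEAD   # prints: main

# With a remote, you would follow up with:
#   git push -u origin main
#   git push origin --delete master
```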

Computer Science, Technology

Bringing back machines to life

The word “machine” has its origin in the Greek word “makhana”, meaning “device”. Initially used to refer to mechanical structures built to perform an intended action, the term “machine” in modern times encompasses a lot more – automobiles, computers, farm machinery, factory automation systems, robots – the list goes on. At times, it is fascinating to read and learn about the old machines which paved the way for the new, technologically advanced ones.

I am not an automobile enthusiast who gets down to the nitty-gritty of every vehicle I drive. Nevertheless, a few vehicle restoration videos I watched on YouTube in recent times inspired me, not as a driver, but as an engineer. In most of these videos, the YouTuber rescues (that’s the word most of them use) and restores a car or a truck which has been abandoned by its owner. Here is one of them I bookmarked:

It is sheer passion for engineering that drives these folks to search for old vehicles and bring them back to life. Often, the search for a spare part makes them travel miles. One can imagine how satisfying it must be to watch those restored engines run again. Aren’t they giving these old machines a rebirth?

My thoughts turned to computers while watching these videos. Does anyone care to bring vintage computers back to life? Even if someone does, is it worth restoring them, considering their sluggishness compared to modern computers? Any decent smartphone we have today has more computing power than those old computers and can perform millions of operations in a fraction of a second. A restored van from the 1960s can still move at 40 mph and give the driver the pleasure of driving a vintage vehicle. But a restored computer from the 1970s – say, an IBM 5100 with memory ranging from 16 to 64 KiB – will never satisfy a user who cannot even listen to a favourite music album on it! Once restored, it might be exciting to see something come up on the screen, but that is more or less useless in the modern era.

There are a few vintage computers that are still considered prized possessions – the Apple Computer (Apple I) launched by Apple in 1976, the IBM Personal Computer launched by IBM in 1981, etc. The exhibits section of the IBM Archives is a good source to read about the evolution of modern computers, starting with the notable first: the IBM 701.

Moore’s law holds! New computers quickly replace the older ones by adding more computing power, memory, storage, etc., thus paving the way for technological innovation in almost every part of our daily life – automobiles, modern medicine, home appliances and more. Present-day software is hungry for more computing power and memory to solve harder problems using better, faster algorithms. This makes computers obsolete at a faster rate than ever before. Nobody cares to rescue and restore them as we do with automobiles!


I found a few old/vintage electronic devices at my home – a Yashica film camera bought by my father around 30 years ago, a tape recorder made by Sharp Corporation in the late 1980s, etc. Occasionally I feel like fixing the tape recorder to make it sing again; then comes the bitter realization that I don’t have any cassettes to play – they have been replaced by Spotify and YouTube Music installed on my smartphone.

Computer Science, Ruby on Rails, Technology

LRU-Cache based KeyGenerator in Rails

The Active Support component of Ruby on Rails provides a class named KeyGenerator for generating secret keys. This is a wrapper around OpenSSL’s implementation of PBKDF2 (Password-Based Key Derivation Function 2) and is commonly used to generate secret keys for encryption use-cases.

CachingKeyGenerator is a wrapper around the KeyGenerator class which avoids re-executing the key generation process when it is called with the same salt and key_size. This is helpful because the key generation process is computationally costly. However, CachingKeyGenerator uses an instance of Concurrent::Map to store the generated keys in memory, and nothing is ever evicted from it. This becomes a problem when the number of keys to be cached is large and a periodic cleanup of the least recently used ones is required.

Assume that we are using CachingKeyGenerator to generate unique secret keys for encrypting the personal information of our application’s users. It returns the key if it exists in the Concurrent::Map, or re-executes the key generation process and caches the key before returning it to the caller. When the userbase is huge, it is not practical to store all the generated keys in the memory of the application instances. Instead, it would be ideal to perform a periodic cleanup of the cached keys to avoid memory hogging.

We will see how we can solve this issue using a new wrapper around KeyGenerator. Instead of the Concurrent::Map used by CachingKeyGenerator, this implementation will use ActiveSupport::Cache::MemoryStore which, as per the documentation, is thread-safe and has an LRU-based cleanup mechanism built in:

This cache has a bounded size specified by the :size option to the initializer (default is 32Mb). When the cache exceeds the allotted size, a cleanup will occur which tries to prune the cache down to three-quarters of the maximum size by removing the least recently used entries.

class LruCachingKeyGenerator
  KEY_EXPIRY = 1.day # expire the keys which are older than a day

  def initialize(key_generator)
    @key_generator = key_generator
    @keys_cache = ActiveSupport::Cache::MemoryStore.new(expires_in: KEY_EXPIRY)
  end

  # Returns a derived key suitable for use.
  def generate_key(*args)
    cached_key = @keys_cache.fetch(args.join)
    return cached_key unless cached_key.nil?

    # cache miss, generate a new key
    generated_key = @key_generator.generate_key(*args)
    @keys_cache.write(args.join, generated_key)
    generated_key
  end
end
As the default cache size is 32Mb, this helps avoid memory hogging in our application while still providing a cache for the frequently used keys.