#!/usr/bin/env python
# coding: utf-8

# Peter Norvig, 15 June 2015
# # Let's Code About Bike Locks
# 
# The [June 15, 2015 post](http://bikesnobnyc.blogspot.com/2015/06/lets-get-this-show-on-road-once-we-all.html) on *Bike Snob NYC* leads with "*Let's talk about bike locks.*" Here's what I want to talk about: in my local bike shop, I saw a combination lock called *WordLock®*, which replaces digits with letters. I classified this as a Fred lock, "[Fred](http://bikesnobnyc.blogspot.com/2014/06/a-fred-too-far.html)" being the term for an amateurish cyclist with inappropriate equipment. I tried the combination "FRED," and was amused with the result:
# 
# ![](http://norvig.com/ipython/fredbuns.jpg)
# 
# FRED BUNS! Naturally I set all the other locks on the rack to FRED BUNS as well. But my curiosity was piqued ...

# # Questions
# 
# 1. How many words can the WordLock® make?
# 2. Can a lock with different letters on the tumblers make more words?
# 3. How many words can be made simultaneously? For example, with the tumblers set to "FRED", the lock above also makes "BUNS" in the next line, but with "SOMN", fails to make a word in the third line. Could different letters make words in every horizontal line?
# 4. Is it a coincidence that the phrase "FRED BUNS" appears, or was it planted there by mischievous WordLock® designers?

# # Vocabulary
# 
# Before we can answer the questions, we'll need to be clear about the vocabulary of the problem and how to represent concepts in code:
# 
# * **Lock**: For our purposes a lock can be modeled as a `list` of 4 **tumblers**.
# * **Tumbler**: Each tumbler has 10 distinct letters. I will represent a tumbler as a `str` of 10 letters.
# * **Combination**: Choosing a letter from each tumbler gives a combination, such as "FRED" or "BUNS". There are 10^4 = 10,000 combinations.
# * **Word**: Some combinations (such as "BUNS") are *words*; others (such as "SOMN") are not words. We'll need a collection of dictionary words.
# 
# Now on to the code!
# First the imports I will need and the vocabulary concepts:

# In[1]:

from __future__ import division, print_function # To work in Python 2.x or 3.x
from collections import Counter, defaultdict
import itertools
import random

Lock    = list     # A Lock is a list of tumblers
Tumbler = ''.join  # A Tumbler is 10 characters joined into a str
Word    = ''.join  # A Word is 4 letters joined into a str

# The only remaining vocabulary concept is `combinations`. I will define `fredbuns` to be a lock with four tumblers, but with each tumbler consisting of not all ten letters, but only the two letters that spell "FRED BUNS":

# In[2]:

fredbuns = ['FB', 'RU', 'EN', 'DS'] # A lock with two letters on each of four tumblers

# We need a way to get the combinations that can be made from this lock. It turns out that the built-in function `itertools.product` does most of the job; it generates the product of all 2 × 2 × 2 × 2 = 16 combinations of letters:

# In[3]:

list(itertools.product(*fredbuns))

# I would prefer to deal with the string `'BUNS'` rather than the tuple `('B', 'U', 'N', 'S')`, so I will define a function, `combinations`, that takes a lock as input and returns a set of strings representing the combinations:

# In[4]:

def combinations(lock):
    "Return the set of all combinations that can be made by this lock."
    return {Word(combo) for combo in itertools.product(*lock)}

# In[5]:

combinations(fredbuns)

# Dictionary Words
# ===
# 
# I happen to have handy a file of four-letter words (no, not *[that](http://en.wikipedia.org/wiki/Four-letter_word)* kind of four-letter word). It is the union of an official Scrabble® word list and a list of proper names.
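# Aside (my addition, not from the original notebook): the notebook fetches the word file with a shell command; in an environment without `curl`, the same fetch-if-missing idiom could be sketched in pure Python. The helper name `ensure_file` is my own invention.

```python
import os
from urllib.request import urlretrieve  # Python 3 standard library

def ensure_file(path, url):
    "Download url to path, unless a file named path already exists."
    if not os.path.exists(path):
        urlretrieve(url, path)
    return path

# Hypothetical usage, mirroring the shell command in the next cell:
# ensure_file('words4.txt', 'http://norvig.com/ngrams/words4.txt')
```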
# The following shell command tests if the file has already been downloaded to our local directory and if not, fetches it from the web:

# In[6]:

get_ipython().system(' [ -e words4.txt ] || curl -O http://norvig.com/ngrams/words4.txt')

# Here are the first few lines of the file:

# In[7]:

get_ipython().system(' head words4.txt')

# Python can make a set of words:

# In[8]:

WORDS = set(open('words4.txt').read().split())

# In[9]:

len(WORDS)

# So that means that no lock could ever make more than 4,360 words. Let's define `words_from(lock)` to be the intersection of the dictionary words and the words that can be made with a lock:

# In[10]:

def words_from(lock):
    "The set of words that can be made by this lock."
    return WORDS & combinations(lock)

# In[11]:

words_from(fredbuns)

# I will also introduce the function `show` to print out a lock and its words:

# In[12]:

def show(lock):
    "Show a lock and the words it makes."
    words = words_from(lock)
    print('Lock:  {}\nCount: {}\nWords: {}'
          .format(space(lock), len(words), space(sorted(words))))

space = ' '.join # Function to concatenate strings with a space between each one.

# In[13]:

show(fredbuns)

# For this tiny lock with just two letters on each tumbler, we find that 6 of the 16 possible combinations are words. We're now ready to answer the real questions.

# # Question 1: How Many Words?
# 
# Here is the answer:

# In[14]:

wordlock = ['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE']

# In[15]:

show(wordlock)

# # How Secure is WordLock?
# 
# The lock makes 1,118 words (according to my word list). You might say that an attacker who knows the combination is a word would find this lock to be only 11.18% as secure as a 4-digit lock with 10,000 combinations. But in reality, every cable lock is [vulnerable](https://www.sfbike.org/news/video-how-to-lock-your-bike/) to an attacker with wire cutters, or with a knowledge of lock-picking, so security is equally poor for WordLock® and for an equivalent lock with numbers.
# (You should use a hardened steel U-lock instead.)

# # Baseline: Random Locks
# 
# Question 2 asks if a different lock can make more words. As a baseline, before we get to improved locks, I will start with completely random locks, as produced by the function `random_lock`. I use `t=4` to say that by default there are 4 tumblers, and `c=10` to indicate 10 letters on each tumbler ("c" for "circumference"). I give `random_lock` an argument to seed the random number generator; that makes calls repeatable if desired.

# In[16]:

def random_lock(t=4, c=10, seed='seed'):
    "Make a lock by sampling randomly and uniformly from the alphabet."
    random.seed(seed)
    return Lock(Tumbler(random.sample(alphabet, c)) for i in range(t))

alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

# In[17]:

show(random_lock())

# Wow, that's not very many words. Let's repeat 100 times and take the best one, with "best" meaning the maximum `word_count`:

# In[18]:

def word_count(lock): return len(words_from(lock))

# In[19]:

random_locks = [random_lock(seed=i) for i in range(100)]
show(max(random_locks, key=word_count))

# Still not very good. We will need a more systematic approach.

# # Question 2: More Words (via Greedy Lock)
# 
# My first idea for a lock with more words is this: consider each tumbler, one at a time, and fill the tumbler with the letters that make the most words. How do I determine which letters make the most words? A `Counter` does most of the work; we feed it a list of the first letter of each word, and then ask it for the ten most common letters (and their counts):

# In[20]:

first_letters = [w[0] for w in WORDS]
Counter(first_letters).most_common(10)

# In other words, the letters SPTBDCLMAR are the most common ways to start a word. Let's add up those counts:

# In[21]:

def n_most_common(counter, n):
    return sum(count for (_, count) in counter.most_common(n))

n_most_common(Counter(first_letters), 10)

# This means that SPTBDCLMAR covers 2,599 words.
# We don't know for sure that these are the best 10 letters to put on the first tumbler, but we do know that whatever letters are best, they can't form more than 2,599 words, so we have an upper bound on the number of words in a lock (and the 1,118 from `wordlock` is a lower bound).
# 
# What letters should we put on the second tumbler? We will do the same thing, but this time don't consider *all* the words in the dictionary; just consider the 2,599 words that start with one of the ten letters on the first tumbler. Continue this way until we fill in all four tumblers. This is called a *greedy* approach, because when we consider each tumbler, we pick the solution that looks best right then, for that tumbler, without consideration for future tumblers.

# In[22]:

def greedy_lock(t=4, c=10, words=WORDS):
    "Make a lock with t tumblers, each consisting of the c letters that cover the most words."
    lock = Lock()
    for i in range(t):
        # Make a tumbler of the c letters that cover the most words.
        # Then update words to include only the ones that can be made with this tumbler.
        counter = Counter(word[i] for word in words)
        tumbler = Tumbler(L for (L, _) in counter.most_common(c))
        words   = {w for w in words if w[i] in tumbler}
        lock.append(tumbler)
    return lock

# In[23]:

show(greedy_lock())

# Remember that the `wordlock` gave 1,118 words, so the greedy lock with 1,177 is better, but only by 5%. Is it possible to do better still?

# # Question 2: More Words (via Improved Locks)
# 
# Here's another idea to get more words from a lock:
# 
# 1. Start with some lock.
# 2. Pick, at random, one letter on one tumbler and change it to a new letter.
# 3. If the change yields more words, keep the change; otherwise discard the change.
# 4. Repeat.
# 
# We can implement this strategy with the function `improved_lock`, which calls `changed_lock` to make a random change:

# In[24]:

def improved_lock(lock, steps=3000):
    "Randomly change letters in lock, keeping changes that improve the score."
    score = word_count(lock)
    for i in range(steps):
        lock2 = changed_lock(lock)
        score2 = word_count(lock2)
        if score2 >= score:
            lock, score = lock2, score2
    return lock

def changed_lock(lock):
    "Change one letter in one tumbler."
    lock2 = Lock(lock)
    i = random.randrange(len(lock))
    old = random.choice(lock[i])
    new = random.choice([L for L in alphabet if L not in lock[i]])
    lock2[i] = lock2[i].replace(old, new)
    return lock2

# Let's see how this does to improve the best lock we've seen so far, the greedy lock:

# In[25]:

show(improved_lock(greedy_lock()))

# We got up to 1,240 words, an improvement of 5%, but can we go beyond that? I'll improve 50 random locks (this will take around 5 minutes):

# In[26]:

get_ipython().run_line_magic('time', 'improved_locks = [improved_lock(random_lock(seed=i)) for i in range(50)]')

# Let's see what we got:

# In[27]:

Counter(map(word_count, improved_locks))

# So 31/50 locks got a score of 1240. My first reaction to this was that if I discovered 31 different locks with score 1240, then probably there is at least one undiscovered lock with a score above 1240. But a discussion with [Matt Chisholm](https://blog.glyphobet.net/faq) changed my thinking. The key is to realize that some locks that look different are actually the same; they just have the letters in a different order. I define the function `alock` to put each tumbler into alphabetical order (and make the lock be a tuple rather than a list, so that it can be an entry in a `dict` or `set`):

# In[28]:

def alock(lock):
    "Canonicalize lock by alphabetizing the letters in each tumbler."
    return tuple(Tumbler(sorted(tumbler)) for tumbler in lock)

# Then we can find the unique locks:

# In[29]:

def unique_locks(locks):
    "Return a dict of {lock: word_count} for the distinct locks."
    return {alock(lock): word_count(lock) for lock in locks}

unique_locks(improved_locks)

# So out of the 50 `improved_locks` there are actually only 9 distinct ones.
# And only two have a score of 1240:

# In[30]:

{L for L in _ if word_count(L) == 1240}

# These two differ in just one letter (a `P` or a `W` in the second tumbler).
# 
# This discovery changes my whole thinking about the space of scores for locks. Previously I imagined a spiky "porcupine-shaped" landscape, with 31 different peaks hitting a height of 1240 (and probably other peaks that we haven't discovered yet). But now I have a different picture of the landscape: a single peak containing the two locks (one with a `P` and one with a `W`), surrounded by rolling hills.

# # Question 3: Simultaneous Words
# 
# Can we make a lock that spells 10 words simultaneously? One possible approach would be to start with any lock and randomly change it (just as we did with `improved_lock`), but measuring improvements by the number of horizontal words formed. My intuition is that this approach would work, eventually, but that progress would be slow, because most random changes to a letter would not make a word.
# 
# An alternative approach is to think of the lock not as a list of 4 vertical tumblers (each with 10 letters), but rather as a list of 10 horizontal words (each with 4 letters). I'll call this the *word list* representation, and note that a lock and a word list are *[matrix transposes](http://en.wikipedia.org/wiki/Transpose)* of each other: they swap rows for columns. There is an [old trick](https://books.google.com/books?id=eH6jBQAAQBAJ&pg=PA574&lpg=PA574&dq=lisp+transpose+matrix&source=bl&ots=Yixwj8m3k4&sig=KoeuJnFhRnJsiD06_Cx56rUOetQ&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMIyM-WiriLxgIVD6OICh2QcwBK#v=onepage&q=transpose%20matrix&f=false) to compute the transpose of a matrix `M` with the expression `zip(*M)`.
# But `zip` returns tuples; we want strings, so we can define `transpose` as:

# In[31]:

def transpose(strings): return [Word(letters) for letters in zip(*strings)]

# And we can see that the transpose of the `wordlock` is a list of words:

# In[32]:

transpose(['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE'])

# The first row of the word list has the letters SLED, because those are the letters in the first column of the lock. You can see that the WordLock® is designed to spell out LOOK FAST BIKE, among other words.
# 
# Now we're ready to find a good word list with this strategy:
# 
# 1. Start with some word list (e.g., a random sample of 10 words from `WORDS`).
# 2. Pick, at random, one word and change it to a new word.
# 3. If the change is an improvement, keep the change; otherwise discard the change.
# 4. Repeat.
# 
# But what counts as an improvement? We can't improve the number of words, because we start with 10 words, and every change also gives us 10 words. Rather, we will try to reduce the number of duplicate letters on any tumbler (of the lock that corresponds to the word list), and stop when there are no duplicates.
# 
# The following code implements this approach:

# In[33]:

def improved_wordlist(wordlist):
    "Find a wordlist that has no duplicate letters, via random changes to wordlist."
    score = duplicates(wordlist)
    while score > 0:
        wordlist2 = changed_wordlist(wordlist)
        score2 = duplicates(wordlist2)
        if score2 < score:
            wordlist, score = wordlist2, score2
    return wordlist

def duplicates(wordlist):
    "The number of duplicate letters across all the tumblers of this wordlist."
    lock = transpose(wordlist)
    def dups(tumbler): return len(tumbler) - len(set(tumbler))
    return sum(dups(tumbler) for tumbler in lock)

def changed_wordlist(wordlist, words=list(WORDS)):
    "Make a copy of wordlist and replace one of the words."
    copy = list(wordlist)
    i = random.randrange(len(wordlist))
    copy[i] = random.choice(words)
    return copy

# The structure of `improved_wordlist` is similar to `improved_lock`, with a few differences:
# 
# 1. We are minimizing duplicates, not maximizing word count.
# 2. We stop when the score is 0, rather than continuing for a given number of iterations.
# 3. We want to make a `random.choice` from `WORDS`. But `random.choice` can't operate on a `set`, so we have to introduce `words=list(WORDS)`.
# 
# Now we can find some wordlists:

# In[34]:

improved_wordlist(random.sample(WORDS, 10))

# That was easy! Can we [go to 11](https://www.youtube.com/watch?v=4xgx4k83zzc)?

# In[35]:

improved_wordlist(random.sample(WORDS, 11))

# # Improving Anything
# 
# We now have two similar functions, `improved_lock` and `improved_wordlist`. Could (and should) we replace them with a single function, say `improved`, that could improve locks, wordlists, and anything else?
# 
# The answer is: *yes* we could, and *maybe* we should.
# 
# It is nice to form an abstraction for the idea of *improvement*. (Traditionally, the method we have used for improvement is called *hill climbing*, because of the analogy that the score is like the elevation on a topographic map, and we are trying to find our way to a peak.)
# 
# However, there are many variations on the theme of *improvement*: maximizing or minimizing? Repeat for a given number of iterations, or continue until we meet a goal? I don't want `improved` to have an argument list a mile long, and I felt that five arguments is right on the border of acceptable. The arguments are:
# 
# 1. `item`: the object to start with; this is what we will try to improve.
# 2. `changed`: a function that generates a new item.
# 3. `scorer`: a function that evaluates the quality of an item.
# 4. `extremum`: `max` if we are maximizing scores, or `min` if we are minimizing.
# 5. `stop`: a predicate with args `(i, score)`, where `i` is the iteration number, and `score` is `scorer(item)`. Return `True` to stop.

# In[36]:

def improved(item, changed, scorer, extremum, stop):
    """Apply the function `changed` to `item` and evaluate with the function `scorer`;
    when `stop(i, score)` is true, return `item`."""
    score = scorer(item)
    for i in itertools.count(0):
        if stop(i, score):
            return item
        item2 = changed(item)
        score2 = scorer(item2)
        if score2 == extremum(score, score2):
            item, score = item2, score2

# Now we can re-implement `improved_lock` and `improved_wordlist` using `improved`:

# In[37]:

def improved_lock(lock, steps=3000):
    "Randomly change letters in lock, keeping changes that improve the score."
    return improved(lock, changed_lock, word_count, max,
                    lambda i, _: i == steps)

def improved_wordlist(wordlist):
    "Find a wordlist that has no duplicate letters, via random changes to wordlist."
    return improved(wordlist, changed_wordlist, duplicates, min,
                    lambda _, score: score == 0)

# In[38]:

show(improved_lock(random_lock()))

# In[39]:

improved_wordlist(random.sample(WORDS, 10))

# # Question 4: Coincidence?
# 
# There is still one unanswered question: did the designers of WordLock® deliberately put "FRED BUNS" in, or was it a coincidence? Hacker News reader [emhart](https://news.ycombinator.com/user?id=emhart) (aka the competitive lockpicker Schuyler Towne) astutely commented that he had found the [patent](https://www.google.com/patents/US6621405) assigned to WordLock; it describes an algorithm similar to my `greedy_lock`. After seeing that, I'm inclined to believe that "FRED BUNS" is the coincidental result of running the algorithm. On the other hand, there is a [followup patent](https://www.google.com/patents/US20080053167) that discusses a refinement "wherein the letters on the wheels are configured to spell a first word displayed on a first row of letters and a second word displayed on a second row of letters."
# So the possibility of a two-word phrase was something that Wordlock LLC was aware of.
# 
# We see below that the procedure described in the [patent](https://www.google.com/patents/US6621405) is not quite as good as `greedy_lock`, because the patent states that at each tumbler position "*the entire word list is scanned*" to produce the letter frequencies, whereas `greedy_lock` scans only the words that are consistent with the previous tumblers. Because of that difference, `greedy_lock` produces more words, 1,177 to 1,161:

# In[40]:

def greedy_lock_patented(t=4, c=10, words=WORDS):
    "Make a lock with t tumblers, each consisting of the c letters that cover the most words."
    lock = Lock()
    for i in range(t):
        counter = Counter(word[i] for word in words)
        tumbler = Tumbler(L for (L, _) in counter.most_common(c))
        # words = {w for w in words if w[i] in tumbler}  # <<<< The patent does not update words
        lock.append(tumbler)
    return lock

# In[41]:

word_count(greedy_lock())

# In[42]:

word_count(greedy_lock_patented())

# # Tests
# 
# It is a good idea to have some tests, in case you want to change some code and see if you have introduced an error. Also, tests serve as examples of usage of functions. The following tests have poor coverage, because it is hard to test non-deterministic functions, and I didn't attempt that here.
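# As a sketch (my addition, not in the original notebook): one way to get *some* coverage of a non-deterministic function is to assert invariants rather than exact outputs. For `changed_lock`, two invariants hold on any lock whose tumblers have distinct letters: the shape is unchanged, and exactly one letter differs. The code below repeats `changed_lock` from above so the sketch is self-contained; `test_changed_lock` is my own name.

```python
import random

alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def changed_lock(lock):  # repeated from above, for self-containment
    "Change one letter in one tumbler."
    lock2 = list(lock)
    i = random.randrange(len(lock))
    old = random.choice(lock[i])
    new = random.choice([L for L in alphabet if L not in lock[i]])
    lock2[i] = lock2[i].replace(old, new)
    return lock2

def test_changed_lock(trials=100):
    "Invariant test: a changed lock keeps its shape and differs in exactly one letter."
    lock = ['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE']
    for _ in range(trials):
        lock2 = changed_lock(lock)
        assert len(lock2) == len(lock)
        assert all(len(t2) == len(t) for (t, t2) in zip(lock, lock2))
        diffs = sum(a != b for (t, t2) in zip(lock, lock2) for (a, b) in zip(t, t2))
        assert diffs == 1  # str.replace swaps the old letter in place, so one position changes
    return 'pass'

test_changed_lock()
```

Running `test_changed_lock()` many times exercises many random choices, so a regression that breaks either invariant is likely to be caught even without fixing the random seed.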
# In[43]:

def tests():
    assert 'WORD' in WORDS
    assert 'FRED' in WORDS
    assert 'BUNS' in WORDS
    assert 'XYZZ' not in WORDS
    assert 'word' not in WORDS
    assert 'FIVE' in WORDS
    assert 'FIVER' not in WORDS
    assert len(WORDS) == 4360
    assert fredbuns == ['FB', 'RU', 'EN', 'DS']
    assert combinations(fredbuns) == {
        'FRED','FRES','FRND','FRNS','FUED','FUES','FUND','FUNS',
        'BRED','BRES','BRND','BRNS','BUED','BUES','BUND','BUNS'}
    assert words_from(fredbuns) == {'FRED', 'FUND', 'FUNS', 'BRED', 'BUND', 'BUNS'}
    assert "FRED" in words_from(wordlock) and "BUNS" in words_from(wordlock)
    assert wordlock == ['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE']
    assert len(combinations(wordlock)) == 10000
    assert word_count(wordlock) == 1118
    assert transpose(['HIE', 'BYE']) == ['HB', 'IY', 'EE']
    assert transpose(transpose(wordlock)) == wordlock
    return 'pass'

tests()

# # Addendum: 2018
# 
# New [research](https://www.theverge.com/2018/10/7/17940352/turing-test-one-word-minimal-human-ai-machine-poop) suggests that the one word that humans are most likely to think was generated by another human rather than by a machine is *"poop"*. Is it a coincidence that the same week this research came out, I was in a bike shop and saw the following:
# 
# ![](like.jpg)
# 
# This proves that I'm not the only one with a juvenile reaction to the *WordLock*®, but it raises some questions: Was the last visitor to the store a human asserting their individuality? Or a robot yearning to be free? Or [Triumph the Insult Comic Dog](https://en.wikipedia.org/wiki/Triumph_the_Insult_Comic_Dog)? I guess we'll never know.

# In[44]:

'POOP' in WORDS

# # One More Question
# 
# I wonder if [@BIKESNOBNYC](https://twitter.com/bikesnobnyc) would appreciate this notebook? On the one hand, he is the kind of guy who, in discussing the fact that bicycling is the seventh most popular recreational activity, [wrote]() "*the number seven is itself a highly significant number.
# It is the lowest number that cannot be represented as the sum of the square of three integers*," so it seems he has some interest in mathematical oddities. On the other hand, he followed that up by writing "*I have no idea what that means, but it's true*," so maybe not.

# In[45]:

# Numbers up to 100 that are not the sum of 3 squares
nums = range(11)
sums = {A**2 + B**2 + C**2 for A in nums for B in nums for C in nums}
set(range(101)) - sums
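# A cross-check (my addition, not in the original notebook): by Legendre's three-square theorem, a number fails to be a sum of three squares exactly when it has the form 4^a(8b+7). The brute-force set above should agree with that characterization for numbers up to 100:

```python
def not_three_squares(n):
    "True iff n has the form 4**a * (8*b + 7), i.e., is not a sum of three squares (Legendre)."
    while n > 0 and n % 4 == 0:
        n //= 4
    return n % 8 == 7

nums = range(11)  # 10**2 = 100, so squares of 0..10 suffice
sums = {A**2 + B**2 + C**2 for A in nums for B in nums for C in nums}
brute = set(range(101)) - sums
legendre = {n for n in range(101) if not_three_squares(n)}
assert brute == legendre  # both contain 7, 15, 23, 28, ... and nothing else
```

In particular 7 is the smallest member of either set, confirming the (garbled but true) claim in the quote.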