
AI in Cambridge #1: The Birth of Neural Nets Mystery Cache

Hidden: 6/28/2022
Difficulty: 2 out of 5
Terrain: 1.5 out of 5

Size: micro


Geocache Description:


Note: Due to overly diligent gardeners, I moved the cache location on August 17, 2022. The clues below haven't changed, but the location (and therefore the equations yielding the cache location) has.

The cache is not at the listed coordinates, but is a short walk away. This is the first in a series of mystery caches devoted to AI. This cache requires no computer science experience; Google searches are all you need. BYOP, and please replace the cache as found. Message the cache owner or mail AI.in.Cambridge@gmail.com if you need help.

Let's begin this cache series at the beginning of AI! Back in 1943, McCulloch and Pitts proposed the idea of a fundamental computing unit, inspired by neurons and much simpler than a computer, that could classify input into two categories. Frank Rosenblatt built the first working version in 1958.

Note: not all answers are single digits!

A = The number of letters in the name of that three-syllable computing unit.

One of the so-called fathers of AI worked within sight of the cache location. He and a coauthor wrote a book proving that a simple collection of those fundamental computing units couldn't learn much.

B = The number of letters in either his first or last name (they're the same length).

But if you arrange those units in layers – a neural network – you can learn a lot. The individual "neurons" are simple machines that take a set of numbers (the input), multiply each by another number called the weight, and sum the results. The result is fed through a simple mathematical function, and the output can then become the input to another neuron in the network. The neural networks in our brains have billions of neurons; neural networks in software can too.
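The computation described above is simple enough to sketch in a few lines. This is an illustrative toy, not the answer to any clue: the weights, bias, and step activation here are made-up choices.

```python
# A minimal sketch of one artificial "neuron": multiply each input by a
# weight, sum the results, and pass the total through a simple function.

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs, squashed by a step function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # this output can feed another neuron

# Two inputs with hand-picked weights: 1.0*0.6 + 0.5*(-0.4) = 0.4 > 0
print(neuron([1.0, 0.5], [0.6, -0.4]))  # -> 1
```

Stacking many of these, with each layer's outputs becoming the next layer's inputs, is all a neural network is.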

A network of these neurons, with just the right weights in place, can make good decisions, like determining whether an image portrays a cat or a dog. But how do you figure out the right weights? You make some guesses when you begin, and then change them over time as the system is exposed to new data. A technique invented in 1986 is commonly used to adjust those weights and has become a mainstay of connectionism.
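The guess-then-adjust idea can be shown with a toy example (carefully not naming the technique, since that's the clue!). Here a single weight is repeatedly nudged so the neuron's output moves toward a target; the data, learning rate, and squared-error measure are illustrative assumptions.

```python
# Toy weight adjustment: start from a guess, then repeatedly nudge the
# weight in the direction that shrinks the squared error (output - target)^2.

x, target = 2.0, 1.0
w = 0.1            # initial guess at the weight
lr = 0.05          # learning rate: how big each nudge is

for _ in range(50):
    output = w * x
    error = output - target
    w -= lr * 2 * error * x   # step downhill on the squared error

print(round(w, 3))  # -> 0.5, since 0.5 * 2.0 hits the target of 1.0
```

Real networks do the same thing across millions of weights at once, using the labeled training data to measure the error.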

C = The number of letters in the name of this technique. (Hint: C > 10.)

This cache is located in what was once called AI Alley, and is directly opposite what was once the headquarters of a long-defunct company called Symbolics, which built computers optimized for a particular programming language. Those computers were in heavy use at the MIT AI Lab (also within sight of the cache) at the time.

D = The number of letters in the name of that programming language.

Neural nets are often used for supervised learning, where a labeled dataset is used to train the neural network; the more data, the better. One popular dataset for training image classification neural nets contains 60,000 pictures of trucks, ships, cars, planes, birds, deer, frogs, dogs, horses, and of course cats.

E = The two-digit number in the dataset's name.

The cache is located at:

N 42 21.   (A • B • C) – (A + B + C + D)
W 71 05.   (A • D • E) + (B – C – D)

Additional Hints (Decrypt)

Unatvat terra ovfba ghor nggnpurq gb n sraprcbfg, irel pybfr gb gur tebhaq. Ybbx sbe n "Pnzoevqtr Genssvp" cyngr ba gur tebhaq, naq jnyx njnl sebz gur fgerrg.

Decryption Key

A|B|C|D|E|F|G|H|I|J|K|L|M
-------------------------
N|O|P|Q|R|S|T|U|V|W|X|Y|Z

(letter above equals below, and vice versa)