I don’t think people realize this, but AirPods are only one step away from telepathy. It’s a big step, admittedly, but let me explain.
Initially, the above statement sounds asinine, but let’s break it down a bit. It’s no longer stupid to walk around with AirPods in; in fact, it’s the norm, and people have full-blown conversations with them in. They have active noise canceling and a direct channel to your phone, and thus to the entire internet.
We currently have head-mounted systems which can read brainwaves. Converting those waves into thought patterns is one of the problems I want solved in my lifetime. It’s widely debated whether this is even possible, given that the skull acts as a natural low-pass filter, but let’s suppose we can do it.
Sidenote: This is my philosophy for a lot of my thought experiments, and it’s brought on by something my first-year programming professor once told me: if you don’t know how to solve something, just write a function called “doHardThing” and move on. Not only do you unblock yourself and keep the creative juices flowing, but sometimes you come back to it and find it isn’t so hard, or impossible, after all.
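To make the professor’s advice concrete, here’s a minimal sketch of the idea. Everything except the `doHardThing` name is my own illustration: stub out the impossible part, and the rest of the pipeline can take shape around it.

```python
def do_hard_thing(brainwaves):
    """Stub for the genuinely hard part: decoding raw
    brainwave data into words. We don't know how to do
    this yet, so we mark it and move on."""
    raise NotImplementedError("come back to this later")

def telepathy(brainwaves):
    # The rest of the pipeline is easy to write once the
    # hard step is stubbed out.
    words = do_hard_thing(brainwaves)  # the hard thing
    return f"sending to AirPods: {words}"
```

Calling `telepathy` blows up on the stub, of course, but the shape of the system is now on the page, which is the whole point.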
So, if you can read brainwaves with a device and you can inject those thoughts straight through AirPods, you’ve got telepathy. Take it one step further with radio-based auditory implants and you’re done. Again, this skips over a metric fuckton of problems, but whatever.
The theme of this blog post is auditory exploration. I think it’s criminally underrated, but thankfully it’s blowing up recently, and I’m hypothesizing that AirPods have a lot to do with it. Audio-based networks like voice memos, continuous Discord/FaceTime calls, etc. are all gaining traction, but I want to talk about something in this category that isn’t immediately evident: TikTok.
TikTok warrants its own blog post altogether, but one really neat aspect that played into its success is audio. Explore conventional memes and you’ll see a trend: they’ve got a theme or a template. You use certain memes to mean certain things. Each meme is imbued with a connotation that’s immediately evident when you look at it. Bad Luck Brian, the Awkward Seal, whatever. Previously, this sensory “hinting” was only done visually; TikTok does it with audio.
I’m going to pull back a bit before we move forward and explain what I mean. The way you process sensory input is similar across all your senses. First, there’s a sensory input; in this case, let’s say you see the following meme.
If you’ve never seen this photo before, you’re likely to start with a semi-blank slate. You’ll process the text first, then move to the seal. After seeing this meme a couple of times, your brain will associate the seal with being awkward. If you’re familiar with programming, this is akin to having layered caches.
Over time, you see many of these memes and develop the connotation that the seal does awkward shit. Your brain works by creating a mapping that says Weird Seal Meme => Awkward Event. After a certain point, your brain will process the visual stimulus before the text, and you’ll know immediately that the text prompt will be about some awkward interaction. This is called sensory encoding.
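To stretch the layered-cache analogy into code (all names here are purely illustrative): a familiar image resolves instantly from the “fast” visual cache, while an unfamiliar one falls back to the slow path of actually reading the text, after which the association gets cached.

```python
# Learned visual association: the "fast path" cache.
visual_cache = {"weird_seal": "awkward event"}

def interpret(meme_image, meme_text):
    # Fast path: a familiar image maps straight to its meaning,
    # no reading required.
    if meme_image in visual_cache:
        return visual_cache[meme_image]
    # Slow path: derive the meaning from the text, then cache
    # the image => meaning association for next time.
    meaning = f"derived from text: {meme_text}"
    visual_cache[meme_image] = meaning
    return meaning
```

The first encounter with a new meme takes the slow path; every encounter after that hits the cache, which is roughly what the brain is doing once the seal alone tells you what’s coming.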
Before TikTok, this only occurred in the visual sense. TikTok introduces the idea that these sensory mappings can be built on another sense: audio. The concept isn’t novel, but it is new to social. After scrolling through TikTok, you’ll notice certain sounds are associated with certain kinds of content. Memes are this really neat phenomenon of explore & exploit: an immediate mapping onto a known domain via sensory cues (exploit), with an additional refinement of words/actions to describe specific new events (explore).
We’re going to see a lot more audio trends in the future.