Search Engine Stupidity, Part 4

Talk about how information technology works is often cast in mystical terms for the "uninitiated". For example, we're often told about these things called "algorithms" that will eliminate jobs, protect national security and privacy, and so on and so forth.

We should frame these things in simpler, more transparent terms. An "algorithm" is just a set of instructions for a computer. When we talk about "losing jobs to AI", we mean people choosing to take humans out of a process, replace them with machines, and let the machines handle the decision making.

For example, instead of having a receptionist, a business might choose to run a generic program with a bunch of prompts/menus the caller has to navigate through. [1] The precursor to a system like this might be to have receptionists follow a rigid script rather than using their brains to help customers solve actual problems.
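
A minimal sketch of such a menu system (the options and prompts here are made up for illustration) might look like this. Note that the caller can only pick from what the program anticipated, not describe their actual problem:

// A hypothetical phone-menu "receptionist": the caller is forced
// through a fixed set of prompts instead of talking to a person
var menu = {
    "1": "billing",
    "2": "hours and location",
    "3": "leave a message",
};

function handleKeypress(key) {
    if (key in menu) {
        return "Routing you to: " + menu[key];
    }
    return "Sorry, that is not a valid option.";
}

console.log(handleKeypress("2")); // Routing you to: hours and location
console.log(handleKeypress("9")); // Sorry, that is not a valid option.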

Search engines are similar. You can think of them as a "librarian program". You can ask this AI thing for some information (using a search query) and it will return some result.

This process is opaque—it is a black box you cannot look into and study directly. Google does what Google will do.

An Example Algorithm

Let's try to dress things up to look fancy.

// Hand-picked results for "special" topics
var special_topics = {
    "federal reserve": ["curated result A", "curated result B"],
    "tech censorship": ["curated result C"],
    "9-11 conspiracy": ["curated result D"],
    // ...
};

// Stand-in for the ordinary ranking pipeline
function normalSearchRoutine(search) {
    return ["organic results for: " + search];
}

function handleSearch(search) {
    if (search in special_topics) {
        // Return the hand-picked results for this topic
        return special_topics[search];
    }
    // Otherwise, use the default procedure
    return normalSearchRoutine(search);
}

In this code ("code" being another word people use to cloak their activities in mysticism), we return hand-picked results for certain topics and the normal results for everything else.

The user will approach the search engine and not know how this works—they just enter a query and trust the results.
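
To make that concrete, here is what calling the sketch above might look like (the queries are arbitrary examples). From the outside, both calls look identical, even though one answer is hand-picked:

handleSearch("federal reserve");  // ["curated result A", "curated result B"]
handleSearch("banana bread");     // ["organic results for: banana bread"]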

Taking Your Job

This is what many of the "algorithms" of big tech are doing. The game plan is,

  1. Take some human job and make it mechanical/codified (like how a call center worker may be made to read a script)
  2. Slowly reduce human agency and involvement in these processes. You might have a system where a human only signs off at the last step, while the rest of the process proceeds without any human needing to know or trust another human (see the sketch after this list)
  3. Entirely eliminate humans, reducing the complexity of the jobs that need doing if necessary: rather than offering custom work, offer a configuration of a handful of preset options
  4. Collect data, tweak the algorithms, and keep optimizing how people interact with the system.
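
As a rough illustration of step 2, consider a hypothetical approval pipeline (the rules, fields, and names here are all invented for this sketch). The machine makes the decision; the only human involvement is a sign-off at the end:

// Hypothetical automated approval pipeline: rules make the
// decision, a human appears only at the final sign-off step
function processApplication(application) {
    var score = 0;
    if (application.income > 50000) score += 1;
    if (application.yearsEmployed > 2) score += 1;
    if (application.hasDefaults) score -= 2;

    var decision = score >= 1 ? "approve" : "deny";

    // The lone human touchpoint: a yes/no on an already-made decision
    return requestHumanSignOff(decision, application);
}

function requestHumanSignOff(decision, application) {
    // In practice this might just be a button in a work queue;
    // here we simply log it and pass the machine's decision through
    console.log("Please sign off: " + decision + " for " + application.name);
    return decision;
}

processApplication({ name: "Alice", income: 60000, yearsEmployed: 3, hasDefaults: false });
// logs: Please sign off: approve for Alice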

What a bunch of anti-social nerds.

Many areas of work must still be done with human hands. But with more IoT (Internet of Things) technology, robots, drones, and the like, we can expect more and more sectors to be attacked.

Consider,

  • digital music playback (with holograms?!??) over live performance
  • self-checkout at stores
  • streaming services
  • mass-produced goods (often not user serviceable)
  • public education (textbooks and standardized materials)
  • Apple devices (vs. building and configuring your own PC)
  • tract homes
  • fast food
In every case above, you can observe the same general pattern.

Solutions

Exercise your agency and take control of what you can. Understand your role in handing things over to "algorithms" and consider carefully whether adopting some new technology actually makes anything better.

This is not an anti-technology post—indeed I'm writing this from a computer and publishing it to the Internet!

Rather, this is a reminder that tools should remain tools. We should make tools useful to us (humans) rather than shaping our habits to fit them. You are more than a "cog in the system". You are an individual, and you do not have to try to be unique by flipping switches to configure a "profile" for some algorithm to process and for nerds to data-perv at.


  [1] Often these systems are very annoying. It is easier to talk with people.
