Humankind still rules the world we live in, but the seeds of our demise were, like ancient DNA, planted long ago. Fate and free will, always fodder for philosophical sparring, have become murkier subjects still as algorithms—snippets of computer code—insert themselves into nearly every decision and process in our lives. Even the most complex bits of life can be reduced to binary decisions: If this, then that; if that, then this. It’s in this environment that the algorithm thrives. And it’s in this environment that we live.

Thanks to ever-increasing computing speeds and miles of code accounting for nearly every eventuality, algorithms have quickly risen to assume control of many facets of our lives, from Google search to Spotify, from insurance rates to which route we drive on the way home from work. Algorithms can even read our minds. It’s that last development that holds either the most promise or the most peril, depending on with whom you speak.

It’s difficult to identify a seminal moment when algorithms’ march toward dominance tipped toward critical mass. For years, the takeover advanced silently. Only when its effects became indelible, when algorithms’ rise in society became stark, palpable, did people sound the alarm. Needless to say, humans are outmatched. Where we see chaos and unfathomable amounts of data, bots driven by algorithms detect patterns, discern order and draw conclusions.

In a way, algorithms and the software that employs them compose a fold of human evolution. Every day, Silicon Valley proposes to outsource more of our lives’ mundane activities. It’s a seductive proposition. Why worry about driving when BMW, Audi and others make algorithms for that? Why waste glove-compartment space on a map when our phones, as directed by Apple or Google, can direct us where to go? Why bother working the bar crowd when a dating-site algorithm from OkCupid can deliver matched personalities by the dozens? Why write down shopping lists when we can bark at our refrigerator “More milk!” and have it delivered by Amazon in an hour? We have designed algorithms to decode our behavior and our brains, and they have succeeded. The only question: Now what? With any luck, the answer humans wish for will be in agreement with that of the bots.

Thirty years ago algorithms first gained notoriety by cracking our financial markets and giving tech-savvy firms the edge on trading floors. Since then they’ve moved on to affect, if not yet control, every aspect of our lives: They decide what jobs we work, whom we marry, where we live, where we drive, what prescriptions we receive, what music we hear, what grades we get and how our money is invested. Algorithms have invaded those nuanced bastions where it would seem impossible to replicate a human’s understanding and touch, tasks such as creating original music, grading written essays, writing original fiction and playing games like poker that mix logical processes with nonlinear takes on human emotions.

So what are these things, these algorithms that are so well poised to replace us? While the name carries a whiff of technical erudition, an algorithm is a simple device. It’s a set of instructions that, given input, produces output. An algorithm needn’t involve computers. A set of instructions for making morning coffee by hand is technically an algorithm. Of course, algorithms can also involve thousands of inputs, database queries, calculations and dynamic, evolving computations. One of the first algorithms many engineering students are required to compose in basic computer science courses is one that will play a perfect game of tic-tac-toe. The inputs are the moves of the human; the outputs, the moves of the computer. All computer languages—C, Java, Ruby, Python, PHP, whatever—are vehicles created to express algorithms. These days, the powers of prediction residing in computer code make tic-tac-toe programs analogous to the sticks chimpanzees use to harvest termites.
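
The tic-tac-toe assignment mentioned above fits in a few dozen lines. Here is a minimal, illustrative Python sketch (not any particular course’s solution): the board is the input, the computer’s move is the output, and a technique called minimax search, which examines every possible continuation of the game, guarantees perfect play.

```python
# A minimal sketch of the classic coursework exercise: an algorithm that
# plays perfect tic-tac-toe. Input: a nine-cell board; output: a move.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): X maximizes the score, O minimizes it."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    if ' ' not in board:
        return 0, None                          # draw
    best = None
    for i, cell in enumerate(board):
        if cell != ' ':
            continue
        board[i] = player                       # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = ' '                          # ...then undo it
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, i)
    return best

def best_move(board, player):
    """The algorithm proper: given an input board, produce an output move."""
    return minimax(board, player)[1]
```

Given the board `['X', 'X', ' ', ' ', 'O', ' ', ' ', ' ', ' ']` with O to move, `best_move` returns 2, the only square that blocks X’s win.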

The specter of somebody, something, being able to read our thoughts and our intentions by parsing our words seems incredible. But it doesn’t seem impossible. We assume that psychologists operate in something of a similar fashion, though their feedback is neither as demonstrative nor as prompt as that of algorithms. In all these cases, we give algorithms and therapists a lot to work with: We answer questions, we make statements, we talk and talk until the words pile up into the hundreds or thousands. But what if we gave them nothing? What if we offered no words, no typing, no hand gestures—just our faces? Could algorithms still read us?

My three children, like most, have an affinity for television. At home we limit the time they’re allowed to watch, but given the chance, they will turn their attention wholly over to the pixels in front of them. Outside noises, such as a parent asking a question, are rendered nonexistent. Their faces, it seems to me, settle into a kind of open-jawed stupor that can stay frozen for the entire length of the program they’re watching. What happens on the screen effects no change on their little countenances. Or at least that’s my view of things, the human view.

Algorithms, however, can sniff out our brains’ inner workings during times like this, when we’re offering few palpable clues. Even seemingly vacant expressions, like those on children watching television, offer data that can be parsed by tools that are sensitive enough to detect them—tools wielded by algorithms, of course.

Several companies are developing this kind of technology, using algorithms to read people’s faces as portals to their brains. One of the companies, Emotient, has a direct lineage to Paul Ekman, who 60 years ago began to study the meaning of facial expressions. Ekman linked different movements of the lips, brow, cheeks and forehead to six distinct emotions: happiness, sadness, anger, disgust, fear and surprise. After spending more than 20 years on the subject, Ekman in 1978 published what he called FACS, or Facial Action Coding System, which categorizes every facial expression. FACS provides a set of standards to decode every natural facial movement, from a slight upturn of the lips to a nose crinkle and an eyebrow dip. After studying the human face and all the ways emotion distorts it, Ekman had classified each derivative of every expression imaginable.
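
Part of what makes Ekman’s system so amenable to code is that it is, at bottom, a lookup: combinations of numbered “action units,” each a specific muscle movement, map to emotions. The sketch below is a drastic simplification for illustration only; it uses Ekman’s action-unit numbering but covers just a handful of the dozens of real units, and real systems score intensities rather than mere presence.

```python
# A simplified, hypothetical sketch of FACS-style classification:
# a few of Ekman's numbered action units (AUs) and prototype
# AU combinations for some of his six basic emotions.

ACTION_UNITS = {
    1: 'inner brow raiser', 2: 'outer brow raiser', 4: 'brow lowerer',
    5: 'upper lid raiser', 6: 'cheek raiser', 9: 'nose wrinkler',
    12: 'lip corner puller', 15: 'lip corner depressor', 26: 'jaw drop',
}

EMOTION_SIGNATURES = {
    'happiness': {6, 12},          # the genuine "Duchenne" smile
    'sadness':   {1, 4, 15},
    'surprise':  {1, 2, 5, 26},
    'disgust':   {9, 15},
}

def classify(observed_aus):
    """Return emotions whose full AU signature appears in the observation."""
    return [emotion for emotion, signature in EMOTION_SIGNATURES.items()
            if signature <= observed_aus]

print(classify({6, 12}))    # a cheek raise plus a lip-corner pull: happiness
```

A slight upturn of the lips alone (AU 12 without AU 6) would match nothing here, which is roughly how the system distinguishes a polite smile from a genuine one.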

A system so comprehensive has to be complicated. For human psychologists or anybody else, it can take years to master. It’s why Ekman himself, now 81, has for 30 years been one of the most sought-after consultants in the world. He has done work for the CIA, the FBI, DreamWorks and Pixar, teaching people at these places the taxonomy of facial movements. When Ekman was developing FACS in the 1970s, it occurred to him that this kind of analysis could one day be packaged into computer code, a collection of algorithms that could recognize every tiny grimace, every eyebrow tilt.

“I absolutely thought it might be possible to automate this, but at that time the computer power just wasn’t available,” Ekman explains to me.

In 1985, while attending a conference on his FACS system in Wales, Ekman met a scientist who had developed one of the world’s first parallel-processing computers. One of the computer’s first applications involved algorithms that recognized human faces at a distance of 50 yards. But when people presented any kind of nonneutral mien, it created enough “noise” that the algorithms no longer recognized them. The noise of people’s emotional expressions consistently foiled the facial-recognition software.

“But that noise,” Ekman says, “was my focus.”

Ekman visited the computer lab in London and experimented with the machine’s power for a week, during which time he was able to program it to recognize several basic facial expressions. Following this experience, Ekman wrote a grant proposal for the National Institute of Mental Health to pursue the work further. But the NIMH told him, succinctly, “We don’t think computers can do what you think they can,” remembers Ekman.

Soon after that, Ekman met Terry Sejnowski, who had a Ph.D. in physics from Princeton and was a researcher and professor at the University of California, San Diego. Sejnowski helped Ekman get his study funded, and the two began to work on automating the task of reading human faces for everything they betray. Joining their project was doctoral student Marian Bartlett, who in the 1990s began to apply machine-learning algorithms to the problem.

Machine-learning algorithms can be powerful tools when unraveling large, complicated riddles for which composing enough linear programming—clear, prescriptive algorithms—would be impossible. Given a set of desired outcomes, a machine-learning algorithm will work to find the most efficient ways to reach similar outcomes with new problems. The more data such algorithms consume, the smarter they become. This is why they’re effective in teasing out nonintuitive relationships within large sets of data.
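
A toy example makes the contrast concrete. Nobody writes explicit rules below; the algorithm is handed labeled examples and infers the pattern itself by comparing new inputs with old ones. The numbers and labels are invented purely for illustration, and feeding in more examples would refine the boundary, which is the sense in which such algorithms get smarter with data.

```python
# One of the simplest machine-learning algorithms: nearest neighbor.
# No rules are programmed; the "knowledge" lives entirely in the data.

import math

training_data = [                         # (features, label) pairs
    ((1.0, 1.2), 'calm'), ((0.8, 0.9), 'calm'), ((1.3, 1.1), 'calm'),
    ((4.2, 3.9), 'agitated'), ((3.8, 4.1), 'agitated'), ((4.0, 4.3), 'agitated'),
]

def nearest_neighbor(train, point):
    """Classify `point` by copying the label of its closest training example."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

print(nearest_neighbor(training_data, (1.1, 1.0)))   # calm
print(nearest_neighbor(training_data, (4.1, 4.0)))   # agitated
```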

Bartlett’s use of machine-learning algorithms proved successful. Her work and that of her colleagues eventually formed the foundation of Emotient, which took its face-reading product to market in 2013, after advances in digital cameras and off-the-shelf processing power made the technology applicable to a wide audience.

At this point the software has become far better than any human at reading faces. “If you ask people to make subjective judgments on what a face is telling them, they’re often wrong. People don’t know what to look for,” says Bartlett, co-founder and lead scientist at Emotient. “But when you measure objectively, there is a huge amount of information.”

The first clients for Emotient’s algorithms came from Madison Avenue, as advertisers wanted to pair the face-reading technology with their normal practice of using focus groups to determine what kinds of new products should be released.

Procter & Gamble, for one, used Emotient’s algorithms to gauge consumer reaction to new detergent scents. P&G asked the people in its focus group to sample the fragrances and then, as is standard, had them fill out a survey of their thoughts about all the product variations. At the end of the event, the participants were allowed to take home any detergent of their choice. As it turned out, the fragrance people reported as their favorite in the survey was usually not the one they chose to take home.

Emotient’s algorithms, however, predicted with a high degree of accuracy which scent a person would take home. P&G recorded the focus group members taking their first smell of each fragrance. Initial reactions, gut reactions, were betrayed by slight changes in their facial expressions, usually lasting far less than a second, when they got the first whiff of a scent. That gut reaction, driven by the brain’s amygdala, is what dictates most decisions. The amygdala is separate from the part of the brain that drives logic and speech, which are what produce the results in participant surveys.

The idea that focus-group surveys are nearly worthless sent a shudder through the advertising industry and has delivered a regular stream of well-paying clients to Emotient ever since. Emotient’s technology has advanced to the point that it can gauge and measure every face in a frame of video. A high-resolution video clip of the NBA finals, for instance, can be evaluated by Emotient’s algorithms to determine the general disposition of the crowd during that moment of the game. It could be 100 faces or 500 faces. The algorithms see and read them all.

The technology Emotient employs has become so efficient, so fast, the company now offers access to its algorithms to anybody via the web. Users upload their videos to Emotient’s site and pay $1.99 per minute for analysis.

I felt compelled to test the algorithms on my gaping children as they watched television. It turns out even their rather stoic faces tell a big story. I recorded as they watched the beginning of The SpongeBob Movie: Sponge Out of Water and uploaded the 15-minute video to Emotient’s servers for analysis. I didn’t have to wait long. In a couple of hours Emotient’s system returned a report to me, one impressive in its depth and thoroughness.

Among the data provided to users, Emotient returns a version of the original video augmented with frames that outline people’s faces. When a person in the video looks away from the camera, the frame disappears. Next to each frame, the software displays a single word describing the emotions of that person at that precise moment. For my little video watchers, the software’s registered emotion was often “neutral,” or the same evaluation most people would make when seeing the empty looks produced by children absorbing on-screen entertainment. But at different points of the movie, cracks of feeling would flit across their faces. The same moment that scared one of my younger kids—duly noted by Emotient—instilled bemusement in the older one. The software then knit all these moments into single story lines for each child. Even a layman could see the inflection points of the movie and how they affected each viewer.

The software isn’t perfect, however. It mistook one of my daughters’ emotions for “disgust” during a 10-minute period when she put her hand on her chin and left it there. But it works well enough that we should expect algorithms to one day lurk in every store camera, every political rally, every car dealership, even job interviews—anyplace where discerning the inner reactions of people is paramount.

This reality doesn’t sit well with Ekman, creator of all the logic behind the algorithms. He remains keenly interested in the science of his system and, as an advisory board member of Emotient, holds equity in the company, but he surprises me by saying, simply, “I’m quite worried.”

He explains further: “If you’re going to analyze people’s expressions and analyze their emotions, I think you should have their consent.”

At this point, Emotient says explicit consent isn’t necessary because it keeps the data anonymous. Faces in a crowd are just faces in a crowd. But Ekman feels that reading somebody’s emotions so mechanically, algorithmically, entails a violation of privacy.

The questions surrounding this use of algorithms insert sci-fi plots into real life. Where do we draw the line? Where does the utility of code stop?

Nicholas Carr, author of The Glass Cage, worries that automation’s march has rendered us stupider, that algorithms demote humans to lever operators who let computer code do all the real work, whether in the cockpit of a plane or at the machinist’s bench in a factory. “Automation severs ends from means,” he writes. “It makes getting what we want easier, but it distances us from the work of knowing.”

When we surrender, as Carr says, the work of knowing, we are capitulating to the power of bots. Carr advocates for humans to spend more time at labor without the artificial proxy of software between them and the job. He points out that studies have shown that airline pilots’ skills degrade when they forfeit most of their flying time to autopilot algorithms. While it’s true that autopilots are one of the reasons air travel has become incredibly safe, Carr argues that pilots should be flying by hand more often, which would keep their skills honed and help mitigate the human errors that have led to most of the major air disasters of the past two decades.

“We can allow ourselves to be carried along by the technological current, wherever it may be taking us, or we can push against it,” writes Carr.

Such talk evokes thoughts of Star Trek’s villainous Borg, a race evolved from a mixture of man and machine whose regular advice to those it conquers, “Resistance is futile,” has become a cultural refrain.

Borg aside, the possible takeover of the world by algorithms infused with artificial intelligence has been discussed for decades. Hollywood has long been intrigued by this plot, from Stanley Kubrick’s 2001: A Space Odyssey to the Terminator franchise and, more recently, Johnny Depp’s Transcendence. Computer scientists, however, have dismissed such tales as hyperbolic and unlikely.

But in the past year, three of the biggest minds in science have separately expressed warnings about software so intelligent it could seize humanity’s place as Earth’s dominant force. Bill Gates, the most successful software entrepreneur of all time, said, “I am in the camp that is concerned about super intelligence.”

Stephen Hawking, the brilliant theoretical physicist, told the BBC, “The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”

Perhaps most foreboding are the thoughts of businessman and inventor Elon Musk, who has repeatedly sounded the alarm with tweets such as “We need to be super careful with AI. Potentially more dangerous than nukes.” Musk has consistently done things that others considered impossible: building Tesla Motors into a major force and founding SpaceX, a private company that designs, manufactures and launches rockets into space for a fraction of the money NASA and other space-race incumbents have spent. His declared worry on the subject deserves attention.

Algorithms have already evaluated many of us with a scrutiny comparable to a human psychologist’s. And most of us have no idea it has happened. In my book Automate This, I profile how the Chicago company Mattersight built a library of 10 million algorithms to categorize human speech. The company’s engineers married these algorithms with speech recognition to create a system that determines a speaker’s exact personality type and, often, what he or she is thinking. The results can be startlingly accurate. The system correctly tagged me as having what is called a thoughts-driven primary personality and a reactions-driven secondary personality.

Often when we call customer-service lines we get a recorded refrain: “This call may be monitored or recorded for quality-assurance purposes.” We assume this has something to do with training or liability. But it often means that millions of algorithms have settled in to listen to us. When the bots know our personalities, they know how to treat us to keep us happy and onboard as profitable customers. By routing our calls to operators with personalities similar to our own, the bots keep customer-service calls mercifully short—and cheap.

What was something of an experiment when I first wrote about it has become a sweeping movement within consumer-facing companies. Mattersight CEO Kelly Conway recently told me his algorithms have now profiled 20 percent of American adults’ personalities.

The Google search algorithm, perhaps the most powerful in the world, decides much of how we go about our lives and becomes increasingly tailored to our tastes the more we use it. It directs where we eat, what businesses we patronize, where we decide to live, travel, go to school, raise a family. Most people’s web interactions begin with the Google search box. What it decides to put on the first page—or prepopulate before we even finish typing our thoughts—is pivotal, whether we’re searching for good Thai food or the best ski resort for early-season snow. We needn’t know a restaurant’s name anymore, as Google’s algorithm will figure out all the details for us.

Marketers have long known that our online behavior reveals a great deal about who we are. The government knows this too. The National Security Agency, Edward Snowden told us, used algorithms to determine whether or not someone was a U.S. citizen, as only the communications of noncitizens can be monitored without a warrant. But the algorithms didn’t access data about birthplace or parents; they made this critical judgment based on a person’s browser and web-surfing histories. Faceless computer code was, in effect, the arbiter of U.S. citizenship and the right to privacy it confers.

In fact, the United States is now testing algorithms in lieu of human guards and interrogators. Not only are they cheaper than humans, but they’re better at patrolling our borders. Through kiosks installed at border crossings, the algorithms quiz travelers and analyze their answers, examining word choice and looking for vagueness, pauses and other signs of lying. The algorithms also ingest data from high-definition cameras that measure travelers’ facial expressions and eye movements. So far, the bot has proved far more effective than humans at finding liars. In a test of the technology at a Polish border crossing, the algorithms were effective 94 percent of the time in sussing out test participants who tried to get past the checkpoint with false answers and papers. Human guards who questioned the same people caught none of them.

The existence of the Transportation Security Administration in its current form was recently called into question when its agents failed 67 out of 70 tests in which workers from the Department of Homeland Security tried to smuggle fake explosives, weapons and other contraband past airport checkpoints. The TSA, already a favored target of commentators on the left and right, has never been less popular. Some, only half in jest, have suggested replacing agents with bomb-sniffing dogs. Ekman, the wizard of facial expressions, thinks the TSA is just looking for the wrong thing.

“We should be seeking out the bomber, not the bombs,” he says.

Algorithms could certainly be programmed to look for facial giveaways that indicate a person is hiding something or is on the brink of committing a violent act. They could also be employed at banks to alert guards when somebody wearing the wrong expression comes in the door.

That algorithms could best humans at jobs seemingly essential to maintaining a civil society is unsettling to many. But it’s a real trend.

The state of Missouri, searching for ways to maintain consistent sentencing and reduce the $680 million burden of housing 30,000 inmates in state prisons, in 2005 implemented what’s called the Missouri Automated Sentencing Application. Judges, prosecutors, defense attorneys and even magazine reporters can provide all kinds of inputs regarding the defendant, and the algorithm will provide data on sentences given to similar criminals in the past, along with information on the cost to the state of different sentences.

A charge of first-degree assault for a previous offender age 22 to 34 with a high school education and full-time employment produces an average sentence of 9.3 years. The system also reveals that 7.7 percent of offenders in similar cases were sentenced to probation, 11.5 percent to some kind of treatment program and 80.8 percent to prison. The costs are included, from $9,050 for five years of probation to $167,510 for 85 percent of time served incarcerated.

The Missouri algorithm used to go even further, actually providing judges with recommended sentences. Although the algorithm’s sentences were nonbinding, their existence upset enough people that Missouri legislators imposed restrictions on the system, requiring that the recommended sentences be removed from its output. “It’s a shame, because I think the more knowledge you get to people, the better their decisions will be,” says Gary Oxenhandler, a Missouri circuit court judge and the acting chair of the state’s sentencing advisory committee. “The system is there to help you make decisions. It’s a tool.”

Oxenhandler thinks letting algorithms into the courtroom, as long as they’re not given final say, benefits the legal process. Anything that lightens his load as a judge, he says, can make him more effective in sentencing the 350 to 600 felons he may be overseeing at any one time.

Scott Greenfield, a prominent New York defense lawyer whose blog has become one of the most-read legal sites on the web, finds the whole concept misguided. “Consistency here is a bit of a fool’s errand,” he tells me. “You can’t take into account the myriad differences between human beings” that should affect their sentencing. Only humans, Greenfield insists, can apply the required nuance.

The cost of prisons has become crippling for many states, including California, which released 2,700 inmates this year as part of a measure to trim spending and overcrowding. Oxenhandler thinks applying algorithms to the issue could help all states better figure out who should stay in prison and who is worth the gamble, given the savings, of being released. The time for algorithms, he stresses, is now. “As the economy gets better, people aren’t going to give a damn anymore,” he says. “If we miss our window here, they’re going to end up building more new prisons.”

In a job as important as this—deciding who is free and who is locked up—surely algorithms require some form of supervision. Humans, however, may not be best for that job. Leading computer scientists have, more and more often, looked to algorithms to police themselves.

The algorithm the U.S. State Department uses as part of its Diversity Immigrant Visa Program is supposed to pick a group of applicants at random to be awarded visas each year. In 2011, the lottery’s algorithm did not work as intended but simply awarded visas to the people who had applied earliest, in order. The visas were eventually revoked and the system was rerun. The episode devastated many people who lost what they believed to be legitimate entry to the United States.

Bad algorithms, bad code, the theory goes, can be prevented from doing damage when patrolled by algorithms designed for the job. It sounds ridiculous, but the concept is a rudimentary one within computer science. Most programmers, when creating web and mobile applications, create a parallel set of tests. The tests are, in effect, algorithms that patrol newly written code for ways in which it might break the application. More complex versions of these are known as accountable algorithms.
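
The visa-lottery failure shows the pattern in miniature: one algorithm patrols another. The sketch below is entirely hypothetical, assuming invented names and nothing of the State Department’s actual code. A lottery samples winners uniformly at random, and a watchdog test runs it many times and verifies that no applicant, early or late, is favored, which is exactly the property the 2011 system violated.

```python
# A hypothetical lottery plus the test algorithm that patrols it.
# The 2011 bug, in which "random" selection favored the earliest
# applicants, is precisely what a check like this would have caught.

import random
from collections import Counter

def run_lottery(applicants, n_winners, rng=random):
    """Select n_winners uniformly at random, regardless of application order."""
    return rng.sample(applicants, n_winners)

def test_lottery_is_fair(trials=20_000):
    applicants = list(range(100))          # applicant IDs in arrival order
    counts = Counter()
    for _ in range(trials):
        counts.update(run_lottery(applicants, 5))
    # Every applicant should win about trials * 5/100 = 1,000 times.
    expected = trials * 5 / len(applicants)
    for applicant in applicants:
        assert 0.8 * expected < counts[applicant] < 1.2 * expected, \
            f"applicant {applicant} won {counts[applicant]} times; lottery is biased"

test_lottery_is_fair()
```

Had the lottery instead returned `applicants[:n_winners]`, the earliest IDs would win every trial and the assertion would fail immediately; the patrolling algorithm flags the broken one before any visas go out.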

Computer scientist Joshua Kroll has been pestering the State Department about its visa lottery algorithm for more than a year. The government hasn’t been forthcoming about its methods, forcing Kroll to issue a formal request under the Freedom of Information Act. “They could just be using a big Excel spreadsheet,” he says of the State Department. “We don’t really know what they’re doing.”

Kroll would like to fix problems like this with accountable algorithms that ensure other algorithms do their jobs correctly. He thinks accountable algorithms could help solve thorny problems such as discrimination in job and credit markets, where things such as race and gender may be officially left out of consideration but are often inferred through indirect methods. Fairness is ultimately better determined by code than by humans, Kroll says.

A world with algorithms watching our faces, measuring our words, determining who goes to jail, who gets frisked at the airport—most of that world has already arrived or is coming. Rather than be alarmed, some of the best-informed minds on the subject welcome algorithmic rule.

David Cope is rare in that he’s renowned as an artist and a programmer. He has written reams of code in Lisp, a complicated computer language favored by developers in the AI community, while also composing operas and symphonies that have been performed by elite orchestras around the world. A leader in the creation of AI programs that compose original music, Cope has watched classical music aficionados mistake some of his algorithm’s compositions for the work of Johann Sebastian Bach.

Whereas the author Nicholas Carr argues for more piloting of planes by humans, Cope thinks the better answer, the obvious answer, is to get rid of humans in the cockpit altogether. Even Cope is surprised at how quickly algorithms have marched toward mastery of society. When I asked him three years ago if he thought algorithms could ever compose an original novel, his reply was curt: “No.” When I asked him again this year, he’d changed his mind. He’s currently working on that very thing at his home in Santa Cruz, California.

“It’s natural for humans to both fear and find disgusting matters in which a machine can do better than or replace them,” says Cope. “We’re insecure when it comes to that. When machines can play chess or create something better, it’s damned maddening. But I think we’re gaining something. We can think on a higher level. We can now have these people who have been displaced doing something more interesting.”

More interesting than driving a car, for instance, is designing the software that drives it for us. And as Google, Audi, BMW and several other companies’ self-driving autos have shown, machines have already surpassed humans in this capacity. But not everybody is a software engineer. Cope concedes that some people may lose their jobs to algorithms three or four times over their careers. The key, he says, will be retraining oneself at a higher level to use the newest technology.

“It seems to me that on every level whenever we can get a machine to do a job,” Cope says, “we should do it.”

We simply must hope that by the time algorithms are doing everything, they enjoy having their original creators—the soft, corporeal, needy and primitive versions of wetware called humans—around for company.