If God Did Not Exist: The Memphis Project

One of the old questions people asked AI researchers was, “Why not just program in the Three Laws of Robotics?” – referring to the science-fiction stories by Isaac Asimov.  For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer.  Read the stories!  They don’t even work in the stories!”

It wasn’t until later, with the hindsight of experience, that I understood that was the point.  Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite: that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with each other or the environment.  The society of the Three Laws wasn’t a utopia; it was a cautionary tale.

– Professor Holly Wu

I.

When Hugo McShane was called into the office of the board of directors of Freedom University, he was worried.  He didn’t have tenure, having just been hired from his post-doctoral work at Aarhus University, where he had worked with the AI development group.  For the past ten years, Freedom University had been building up its AI department, and as a Bible-believing Christian, well, Hugo had jumped at the chance to work in FU’s AI group.  He was concerned that something serious enough had come up to have him called to the office of the university president.

Against the stereotype of religious schools being terrible at science, the AI and comp sci departments at FU were top-notch.  Unlike many other technical fields, religion didn’t come up in computers, really.  There weren’t any theologically touchy questions.  It was simply taken as a matter of fact in the conservative Christian community that machines, whatever their actual or potential capabilities, were tools of humans and would not, under any conditions, have souls.  It helped that almost no one understood anything about the guts of artificial intelligence, Hugo thought.

Still, scientists tended to be religiously skeptical, if not hostile, so working at a good school with a good program among people who didn’t sometimes look down their noses at him – which definitely happened when he worked in Denmark – was nice.  He liked his job, he liked Memphis, he liked his church, so he didn’t want any of that to fall apart.

He was taken into the big, wood-paneled office of the president of the board, Gerald Welles.  Welles was a tall, heavyset man around sixty years old, a football player in his youth, with good looks and a big personality.  He was by profession a preacher with an international presence whose great-grandfather had founded the university.  He was also president of the university and, de facto, its owner.

Gerald waited in a space off to the side of the giant desk, where there were three chairs and a sideboard.  He stood and came over to Hugo with a big smile and an extended hand.  He said, “Doctor McShane, I regret that we haven’t had a chance to meet before this, but we’re both busy men.”

Hugo shook Gerald’s hand.  “Think nothing of it, sir.”

“I take it that you’re settling in?”

“Yes, Reverend, I’ve taken quite the shine to Memphis, and I love the department.”

“Great, great.  Have you met Damon Coach?”  Gerald led Hugo over to the chairs with one firm hand on Hugo’s shoulder.

“Ah, no, sir, I have not,” Hugo said as the third man stood up.

Damon Coach was mid-seventies, small, thin, jowly with age.  He wore thick glasses that made it seem light was always shining in his eyes, and he had a bright smile.  His handshake was still firm.  “It’s nice to meet you, Doctor McShane.”

“Everyone just calls me Hugo.”

“Then call me Damon.”

Gerald was pouring drinks.  “Would you like a drink, Hugo?”

“Yes, please, Reverend Welles.”

Gerald poured for the three of them.  Everyone sat down, and Hugo managed his surprise.  Damon Coach was one of the richest men in the world, a billionaire many times over, and part of a whole clan of billionaires.  The Coach empire was mining and heavy industry but ranged from bottled water to private military contractors.

Damon said, “We’ll take as long as we need, but I’d like to get to it.  I don’t have the energy I once had, and this could be a difficult conversation.”

Gerald said, “You’ve got the energy of a man half, no, a third your age.”  He said to Hugo, “But you’re probably sweating bullets, son.  Don’t you fret, this is good.  I think you’ll agree that this is very good.”

Hugo swallowed half of his drink.  It went down very smoothly.  

Damon laughed.  “That’s why we buy the good stuff.  Life’s too short for cheap booze.”  He raised his glass and drank it all down, putting it on a small round table by the chair.  “Soon, you should be able to afford to buy it by the case.”

Hugo: “Sir?”

Gerald said, “We were glad you agreed to work with us here at Freedom University.  I can’t say that I understand the nuts and bolts of your work.  Damon understands more…”

Damon snorted.  “I wouldn’t exactly say that.”

Gerald continued, “But our understanding is that you specialize in computers that, well, win arguments.”

Hugo, more confident now that he was discussing his work, said, “I would prefer to say that I train computers to create coherent arguments.  Victory is a subjective condition in discourse.  One of the primary challenges of artificial general intelligence is how to give computers a ‘sense’ of right and wrong, and I believe the way to do that is to create a computer that can create intelligible arguments and compare them to one another in a hypothetical space.”

Damon: “But not a moral sense of right or wrong?”

Hugo nodded.  “No.  Like the problem of fully self-driving cars.  Right now, they work well, better than any human, but the code is a mess.  There’s machine learning that is bounded by a million different rules, literally a million, that have been hard-coded.  It would be more efficient and elegant to create a hardware-software combination that had the efficiencies of a computer but the sense to generate accurate hypotheses about a given action before taking it.  My work addresses this problem by creating software that generates and compares arguments with data drawn from scientific fields, so none of the problems have moral or ethical dimensions.”

Gerald rolled the glass between his palms.  “Is there a reason for that?”

Hugo paused.  He sensed this was the crux, why he was there.  He looked between the two men and considered his next words carefully.  He said, “Yes.  AI researchers don’t want to offend anyone.  No matter the dataset, any AI ‘saying things’ about social issues – about ethical, religious, legal issues – would face an awful lot of backlash.  It’d make it harder to get funding.  So we all use science stuff because it’s safe and hard for people to understand, anyway.  It’s harder to make an issue of computers making arguments about protein synthesis than about the ethics of driving a pick-up truck.”

“And what if funding wasn’t a problem?” Gerald said, finally taking a drink.

Hugo finished his.  He exhaled.  He was starting to get this strange, floating feeling.  He realized it was the feeling of opportunity.  He said, “I suppose that depends on what the funding was for.”

Damon leaned in.  “Son, we want you to teach a computer to prove there’s a God.”

Gerald said, “And that it is the God of the Bible.”

Hugo’s blood ran cold.  His mouth felt full of cotton balls.  He said slowly, “And how much funding are we talking about?”

Damon leaned back in the chair with glittering eyes.  “We have forty billion committed.  To start.  If there are results, quite a lot more, I’d wager.”

Gerald said, “And we need someone to be in charge of it, son.  Someone who understands the science.  Who understands why we’ve been working so hard these past ten years to make our own high-level computer engineers and scientists.  We’ve been looking for someone like you, Hugo.  Someone who can stand with the best in the world in this field.”

Hugo nodded, dumbfounded.  They had been planning this for years.  Computer science and engineering had seemed nothing but a way to get Freedom University some scientific credentials in a field where religious faith wasn’t a particular obstacle.  Physics and biology were straight out; no top person would even think about working for Freedom University, no matter the size of the paycheck, and they wouldn’t get any funding.  But computer science?  It was no longer a field dominated by Left Coast Silicon Valley liberals.  They’d all moved to Texas.  The corporate money pouring into AI didn’t care about religion, only results.

But it was a misdirection.  He huffed out a breath.  He was impressed by the level of thinking that had gone into the scheme.  As a professional in cognition, he recognized the number of things that could have gone wrong.  And he realized that he wanted computers to have the same long-range planning and intelligence as the men in this room.  He wanted to be one of the men in this room.

And the money!  That wasn’t just software money, that was hardware development money.  That was the kind of money that went into making jet fighters or missile systems.

But Hugo knew that an AI couldn’t prove there was a God.  All it could do was craft highly persuasive, finely tuned arguments about the existence of God, and even then they might not be the arguments expected or wanted.  They would work, but artificial intelligences often found solutions to problems that would elude human minds.  They could be distasteful, even ugly or brutal.

Damon started to say something, but Gerald intervened.  “Let the man have a minute to think, Damon.”

“No, no, I don’t need any more time,” Hugo said.  “I am in.”

Hugo decided they didn’t need to know everything.

II.

They called it the Memphis Project.  They likened it to the development of the atomic bomb – something huge and important and even dangerous done in secret.  And to start, yes, it would be a secret, which was easier to pull off than one might imagine.  An off-campus facility to study AI with mysterious funding and non-disclosure agreements was almost the standard in the field.  Maybe it was social media looking for a new algorithm to drive engagement, or the NSA looking for new ways to find threats to the country, or Disney looking for a way to fire all the directors, writers, actors, and visual effects artists so they could just have computers churning out big-budget smash hits without any human intervention.  Nothing to see here.

Still, big projects took time to get rolling, to get the moneymen to understand that this wasn’t the same as just firing up their laptops – that big-data computation was heavy industry.  That they’d have millions of GPUs burning white hot all day, every day – GPUs that would be obsolete within two years, once the custom hardware built to drive the system came online.  To start, it was a money pit.

Hugo explained, “Billions of humans have been thinking about the Bible for thousands of years.  All the best arguments that humans can make exist already.  You’re doing this, you’re spending all this money for what’s essentially… like, we’re looking for a machine to change people’s minds using techniques that are not evident to those billions of humans over those thousands of years.”

Damon and Gerald talked about it alone because Gerald was shaken.

Gerald said, “I’m starting to have doubts.”

Damon was sanguine.  “It was always going to be a bitch.  We’re fighting the weight of the world.  You know as good as I that young people just aren’t taking to religion like they used to.”

“Not that.  What Hugo said about how this… thing, how this thing doesn’t think like a human.  It makes an itch in a spot I can’t scratch.  I’m wondering if we’re making a deal with the Devil.”

Damon snorted.  “Nonsense.  It isn’t any different than the tricks used by social media or the news.  We aren’t doing anything except finding a new way to spread the message of the Lord in an age where everyone is glued to their phones.  You said that I understand this matter better than you.  Do you believe it?”

Gerald nodded.  He did.  His education was exclusively in theology and business.  Damon was an engineer by education, and part of the reason his businesses had done well over the years was an understanding of the technical issues facing modern industry.

Damon: “These people talking about artificial intelligence like it’s something new… it isn’t.  It’s just algorithms, a kind of mathematics or logic problem.  It’s just a set of instructions.  You could view a recipe as an algorithm.  You do this, then you do that, and if you follow all of the steps in the proper order, in the end, you’ve got yourself a cake.  But human brains aren’t naturally algorithmic.  Your brain doesn’t follow a list of commands written by some guy with a fancy degree from an ivory-tower university.  Your brain follows the dictates of your soul, given to you by God.”

“Are you preaching to me?”

“I would never.  I am just reminding you of what you ought to know, and telling you that this thing is little more than a calculator.”

“But calculators don’t make art.  I’ve seen what these things make.  It is already very good and already getting much better.”

“If you get a big enough computer running really fast for a long time, well, they do make art.  It just seems that art isn’t such a tough nut to crack.”  Damon reached over and touched Gerald’s sleeve.  “My friend, I believe in the same God as you.  But if we could slow these things down and watch what they do in the world out here, all we’d find is a mess of instructions and a bunch of switches turning on and off.  Just instructions on how to flip switches in the right order to do things we want to have done.  I believe that no amount of switches thrown will amount to evil if the machine you build is not built for evil aims.  And even should it do evil, well, we’ll just turn the damn thing off, won’t we?”

III.

Gerald and Damon never thought to talk to Hugo or to do more than superficial research into artificial intelligence.  While on some level it was true that any computer was simply a list of instructions and a large number of switches, it was facile to say that no emergent behavior arose from the complex interaction between large datasets and powerful computers.

Was it intelligence as humans understood it?  No.  Which was part of what fascinated Hugo.  Human cognition happened in a space, and on a power budget, far smaller than a digital computer’s.  A human “thought” with a handful of watts.  When you calculated the power it took to do even the smallest thing on a computer, well, the difference was large.  For instance, a digital computer could perform a simple math function with a minuscule fraction of the effort it took a human to do the same thing, but a human could recognize a face with an ease a computer could not.  They were just… different things.

Some AI researchers worked on modeling organic brains in computers.  It had not gone well.  Despite making the necessary number of “connections,” the modeled brains continued to behave like, well, computers.  No one had yet created a digitized rat brain with rat-like behavior.  It was far easier to model some rat-like behavior by doing what computers did well than by trying to force them to act like a rat.  To Hugo, trying to get machines to act in a human way was not nearly as interesting as exploring what computers did well.

And Hugo never doubted that the cause of the Memphis Project was just.  He knew that the founders of the Western scientific tradition had believed they were uncovering the glory of God’s creation, and that they would, in the end, find the God of the Bible behind it all.  Hugo believed he was part of that tradition, using more modern tools to reveal God’s divine plan.  Why shouldn’t it come from a computer?

While Freedom University had a fine computer science graduate program, the scale of the operation had demanded bringing in outside researchers.  Damon Coach had set a couple of outside men – private investigators with a reputation for shady business – to watch over these outsiders.  They would never be a problem in the early days.  Most engineers simply did the assignments given to them.  Even when they learned what they were working toward, they shrugged and did the job, because the work was interesting and the pay was good.  Politics and theology never came into it for them.  They were the same technocrats who had not blanched at building the hydrogen bomb, even though there was no one to use it on.

Trouble came from within, and it came subtly from an internal hire, a grad student from Freedom University, Marius Sanchez-Luis.  Studying at Freedom University required living by a quite restrictive code of conduct, and Marius never had any issue with it.  His behavior was always exemplary.  He wasn’t interested in girls, boys, drugs, booze… he was interested in computers and Christian theology.

He was also highly intelligent and well-educated, and that could get a little weird for people who were traditionally religious.

He was at lunch when he said, “One of the things that fascinates me when working on AI projects is the opaqueness of what we’re doing.  It is mysterious.  We build the code, but when we run it, it changes so fast and so profoundly that we never truly understand it again.  And then we build new hardware to run it again, with each iteration becoming wild and strange.  God is mysterious.”

He was sitting with the other conservative Christians.  While most of them got along fine socially regardless of their religious views, well, the outside engineers and scientists had their own pursuits.  The Christians mixed with their own, and despite Marius’s weirdness, he was one of them.

Still, the others at the table all looked at him, because he might have suggested blasphemy.  He was oblivious to their looks.  He just ate his lunch as if he had simply thrown out an idle thought, and not suggested that God was a computer.

“This sandwich is really good,” he said.

It was Marius who came to Hugo with the idea of generative adversarial networks.  It was a system of comparison toward a goal, used in processing large datasets of images to train AIs to make pictures.  After training an AI on an image set, you train another AI to trick the first AI into guessing wrong.  The example used most often was sets of cat images, because cats were everywhere on the Internet.  You would train an AI to recognize cat images.  When it could almost always identify cat images, you built an AI to try to trick the first AI.  The generative AI would make an image based on a seed value, which was really just signal noise.  If the generative AI gave enough random images to the discriminator AI, the one that knew what a cat picture looked like, every so often the generative AI would trick the discriminator into thinking the “noise” was a cat picture.  Then the generative AI would adjust its parameters to create more images like the one that had tricked the discriminator AI, making it more likely that the discriminator would be fooled again.

But the discriminator learned, too.  It was an active system.  It was trying not to be tricked, so it would get better at learning which pictures were “really” cat pictures and which ones were false.

Well, you let this system run long enough with enough processing power, and the generative AI would get very good at making cat pictures.  Not just good enough to fool an AI but good enough to fool humans.  And because it arranged the information in a stable, organized way, you could get the generative AI to make pictures with specific properties.  So, you might like a randomly generated image of a cat, created by the AI from a specific seed value.  But you might want to change the color of its eyes from green to blue.  By studying the AI’s output, you’d find that “green-eyed-ness” had a specific numerical value in the system, consistent across all images of green-eyed cats.  Same with blue eyes.  So, you would simply subtract the value for green eyes from the seed value, add in the value for blue eyes, run it through the generative AI, and create an image of the same cat… with blue eyes, this time.
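(For the technically curious, the loop described above looks something like the sketch below.  This is a minimal illustration in PyTorch, not any real project’s code; the tiny networks, the random stand-in “cat” data, and the hand-picked eye-color directions are all assumptions made for the sketch.)

```python
# Minimal GAN sketch: a generator learns to fool a discriminator,
# and the discriminator learns not to be fooled.  All sizes, data,
# and "direction" vectors here are illustrative placeholders.
import torch
import torch.nn as nn

LATENT = 64     # size of the random "seed value" fed to the generator
IMG = 32 * 32   # a flattened stand-in for an image

generator = nn.Sequential(        # turns seed noise into a fake "image"
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(    # scores an image: real (1) vs. fake (0)
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=64):
    # Placeholder for a batch of real cat pictures; a real project
    # would load an actual image dataset here.
    return torch.rand(n, IMG) * 2 - 1

ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)
for step in range(1000):
    noise = torch.randn(64, LATENT)   # the "seed value"
    fake = generator(noise)

    # The discriminator tries not to be tricked: real -> 1, fake -> 0.
    d_loss = (loss_fn(discriminator(real_batch()), ones)
              + loss_fn(discriminator(fake.detach()), zeros))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The generator tries to trick it: it wins when its fakes score as real.
    g_loss = loss_fn(discriminator(fake), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The blue-eyes trick: if analysis found consistent latent directions for
# green-eyed-ness and blue-eyed-ness (these random vectors merely stand in
# for them), editing the cat is just vector arithmetic on the seed.
seed = torch.randn(1, LATENT)        # the cat you happened to like
green_eyes = torch.randn(1, LATENT)  # placeholder "green eyes" direction
blue_eyes = torch.randn(1, LATENT)   # placeholder "blue eyes" direction
same_cat_blue_eyes = generator(seed - green_eyes + blue_eyes)
```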

It didn’t work with words, though.  At least, not until Marius said, “Hey, boss, why aren’t we running a generative adversarial network on this stuff?  I thought that’s why you brought me on board, to make a GAN to process the arguments.”  His graduate work had tried to apply the GAN techniques that worked for images to natural language systems, something that had eluded scientists.

“Your headway wasn’t… spectacular,” Hugo said delicately.

“Yeah, but I didn’t have this much computational power or this much data.  I used all that fursuit porn weirdness from AO3 and Project Gutenberg.  Not the biggest database of theological writings in the history of the world with a server farm that would make the NSA jealous.”

“Okay.  Write up a proposal and budget and send it over to me.  Let’s get this started.”

That’s the way it was in the early days of the Memphis Project.  They had a huge budget and no oversight.  It didn’t even occur to them to hire a risk assessment team.  God would provide.  So, there wasn’t anyone to reason out what Marius had reasoned out.  They just assumed he was doing the same job they were doing and proposing what he believed would be useful tools to achieve their goal.  He wasn’t.

Everyone else was focused on their narrow tasks – something scientists and engineers did very well – but Marius had a longer-range plan.  He always believed they were making a machine that reasoned as God would reason.  And Marius believed that God needed an antagonist to fulfill His divine plan.  If they were to properly make a machine in the image of God’s mind, they would have to create a machine in the image of Satan’s mind, too.  To Marius, it was absolutely, positively, without a shred of doubt necessary.  And, indeed, to him, obvious, so obvious that he didn’t think he needed to spell it out.

Perhaps Damon Coach would have revisited the advice he gave Gerald Welles, that all they were doing was “following a list of instructions to flip switches,” had he known what Marius was starting to imagine: that, perhaps, they were peeking into the mind of God.

To be continued.
