
Farewell To Flesh

Jorge Ignacio Castillo
Published Thursday May 14, 06:41 pm
Alex Garland says A.I. represents the future of intelligent life

Ex Machina

British author Alex Garland has had his hand in some of the best narratives created in the last 25 years. His first novel, The Beach, launched thousands of ill-advised trips to Thailand (and a movie that didn’t at all do it justice). He also wrote the screenplays for a provocative group of sci-fi flicks: 28 Days Later, Sunshine and Never Let Me Go.

Ex Machina is Garland’s first gig as a writer/director, and he knocks it out of the park. Caleb (Domhnall Gleeson) is a star programmer at Bluebook, a Google-like company. The competent but naive coder is selected to spend a week with the mysterious CEO Nathan Bateman (Oscar Isaac) at his secluded residence. Once there, Caleb discovers that the purpose of his visit is to apply the Turing test of artificial intelligence to Ava (Alicia Vikander), a sleek, stunning android that may have its own agenda.

The film works very well as a thriller, but it’s the number of questions it triggers that sets it apart. Does efficiency replace moral conscience in A.I.? Would we have any authority over such machines? And why?

In person, Garland is engaged like few others, and more interested in having a conversation than repeating platitudes. He’s also unnervingly intelligent, yet humble about it. You could have a beer with the guy, discuss the future of mankind and have the best time ever, even though you’d probably come away not feeling particularly optimistic about what’s ahead.

How did you come to the conclusion that a self-aware android would want freedom above anything else?

We assume that A.I. would be like us, but it probably won’t — and that’s embedded in the film. For example, when the two machines talk to each other, the camera is right next to them, but we can’t hear what they’re saying. It’s their language, their consciousness interacting. There’s an analogy there with animals: a dog is sentient and self-aware, but we don’t know what it’s like to be a dog.

How is that related to the pursuit of self-determination?

I don’t know that a self-aware machine would not want to be imprisoned, but it’s reasonable to assume it would find it difficult, if it were given parameters like ours. Nathan puts Ava in a glass box. The glass box has things that allude to an outside world: There’s a garden area across the room, there are pictures torn out of magazines of girls that she looks like. Also, there is a crack on the glass that Ava knows she didn’t make. It must have been something else that lived in the room before her. Hence, there is a sense of being in prison and something sinister happening. Furthermore, if the A.I. has a sense that there’s pleasure to be had in life, it wouldn’t want life to stop. There lies the motivation to escape.

How far along did you plan Ava’s story?

I’ve got a feeling about it, but it’s quite abstract. I believe things work out well for Ava, [since] she’s not in a position of conflict with mankind. This has to do with my own feelings about artificial intelligence: We keep hearing we should be fearful of A.I., but I don’t feel that at all.

Humans have short lifespans and are very vulnerable, not to mention unreasonable. A sentient machine would be free of all that, would have a longer life and, ultimately, more experience. Mankind will die in the solar system, due to energy being finite and distance being massive. A.I. could leave the solar system the way Voyager did; the time problem doesn’t exist for A.I. I see our long-term future as attached to these machines, which is interesting and kind of rewarding.

Does Ava have a gender?

I have my reading, but it’s up to the recipient of the narrative to make up their own mind. Seeing Ava raises three possibilities: is gender in consciousness, is it in physical attributes, or is it conferred on you by other people? I can find a problem with each argument, so I don’t have a clear answer. That said, I believe there is value in asking the question.

What field do you believe is closest to developing artificial intelligence?

The main breakthroughs are happening in programming, specifically in neural nets. There’s a company called DeepMind, recently bought by Google for $500 million. They just published a paper that shows an A.I. with an extraordinary learning capacity, able to play a videogame without being given any information except that a high score is good. It’s a long way from self-awareness and anything you would call an emotional existence, but it’s still very significant.
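The idea Garland is describing, an agent that learns to play knowing only that “a high score is good,” is reinforcement learning. A minimal sketch of that principle is tabular Q-learning on an invented toy game; everything here (the game, the parameter values) is illustrative and has no connection to DeepMind’s actual code:

```python
import random

# A toy "game": the agent sits on a line of positions 0..4 and can move
# left or right. It is told nothing about the rules -- only the score:
# +1 on reaching position 4, 0 otherwise.
N_STATES, GOAL = 5, 4
MOVES = (-1, +1)  # action 0 = left, action 1 = right

def step(state, action):
    nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Q[s][a] estimates the long-term score of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly act greedily, occasionally explore at random.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        nxt, r, done = step(s, a)
        # Nudge the estimate toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# The learned greedy policy: move right from every pre-goal state.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy[:GOAL])
```

The score is the only feedback the agent ever receives, yet the value estimates propagate backwards from the goal until the agent reliably heads toward it. DeepMind’s system applied the same principle at far greater scale, with a deep neural network in place of the lookup table.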

Can I ask you a question?

Hit me!

Are you a science or a tech writer?

No. I’m interested in the subject, but my knowledge is superficial.

Same as me. I wouldn’t call it superficial, it’s a layman’s knowledge. I don’t feel self-conscious about that. I do worry sometimes that the gap between scientists and people who are interested is becoming impossible [to overcome]. Science at the top level is so complicated, you can’t have a meaningful conversation with the authorities on the matter. Of all the problems that come from that, there’s an ethical dimension to it — we need to have an informed debate on A.I., on cloning, and if the science is hidden from us, we can’t have one.
