I awoke in the Midsummer not to call night, in the white and the walk of the morning:
The moon, dwindled and thinned to the fringe of a finger-nail held to the candle,
Or paring of paradisaical fruit, lovely in waning but lustreless,
Stepped from the stool, drew back from the barrow, of dark Maenefa the mountain;
A cusp still clasped him, a fluke yet fanged him, entangled him, not quite utterly.
This was the prized, the desirable sight, unsought, presented so easily,
Parted me leaf and leaf, divided me, eyelid and eyelid of slumber.
Shortly after noon on July 20, 1969, as they orbited some 70 miles above the surface of the moon, Neil Armstrong and Buzz Aldrin detached their lunar lander from the Apollo 11 command module in preparation for descent. From his window aboard the command vessel, Michael Collins watched as the lander rotated away and pitched itself downward. In the lander’s cramped cabin, Aldrin and Armstrong could see the moon’s surface through small triangular windows. At elbow level was the console for the device that would direct the final stage of their approach: the Apollo guidance computer.
For most of the trip, the astronauts had been passengers. The spacecraft had been guiding itself, relaying its position to Mission Control’s IBM mainframe—a contraption the size of a walk-in freezer, which in 1969 was what people thought of when they heard the term computer. Something called a “minicomputer” had recently been introduced; it was the size of a refrigerator. The Apollo guidance computer—there was one on board the command module and another on the lander—was a fraction of that size. At just 70 pounds, it was the most sophisticated such device humanity had yet conceived.
Instead of bulky vacuum tubes, the Apollo computer used thin slices of silicon called chips. Each chip contained a pair of logic gates, and each gate was a simple electronic switch that monitored three inputs, and turned its output to “off” if any of the inputs were “on.” Some 5,600 of these primitive integrated circuits, arranged in a sequence, formed the digital cascade that was the computer’s brain. It was mounted in a hardened metal container on the wall behind the astronauts, then connected by wire to the console in front of them.
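The gate described here is a three-input NOR, and NOR is functionally complete: cascades of these gates alone can compute any logical function. A minimal sketch in Python (the function names are my own, purely illustrative):

```python
def nor3(a, b, c):
    """Three-input NOR: output is 1 ("on") only when every input is 0 ("off")."""
    return 0 if (a or b or c) else 1

# NOR is functionally complete: the other basic gates fall out of it.
def not_(a):
    return nor3(a, 0, 0)

def or_(a, b):
    return not_(nor3(a, b, 0))

def and_(a, b):
    return nor3(not_(a), not_(b), 0)
```

Thousands of such gates, chained so that the outputs of some drive the inputs of others, form the "digital cascade" the article describes.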
The chips had been designed by Fairchild Semiconductor, a technology startup in Palo Alto, California. In the early 1960s, the computing industry was concentrated on the East Coast, where research powerhouses like Bell Labs and MIT dominated; Fairchild was an outpost on the Western frontier. The Apollo program had breathed life into the fledgling company by ordering hundreds of thousands of Fairchild components. The demand for miniaturization had led Gordon Moore, Fairchild’s head of R&D, to hypothesize that the number of components on an integrated circuit would double every year. NASA had pioneered the use of silicon, and the computer on the wall behind the astronauts was Moore’s law’s proof of concept.
The computer’s console, with its numeric keypad, resembled that on a microwave oven, and its small readout screens cast an eerie green light from below. Aldrin managed the device by punching in two-digit commands he had memorized. In response, three small panels displayed five-digit codes that he’d been trained to interpret.
As the astronauts began the first stage of their descent, the engine ignited and the computer slotted the lander into an elliptical orbit that brought them within 50,000 feet of the surface. From there, Aldrin keyed in a new program, dropping the lander from orbit into a contact course with the moon.
For the next three minutes, the cratered lunar landscape grew closer, until, at around 46,000 feet, Armstrong rotated the vehicle, pointing the landing radar toward the surface while the astronauts turned to face Earth. The moon’s gravity is irregular, and to account for this, the astronauts had to take new measurements. With the void outside his window, Aldrin punched in a request to compare the lander’s calculated position with the reading from the radar.
He was answered by a klaxon ringing in his earpiece. Aldrin hurriedly keyed in the two-digit code 5-9-Enter, which translated, roughly, as “display alarm.” The console responded with error code “1202.” Despite his months of simulations, Aldrin didn’t know what this one meant; Armstrong, equally baffled, radioed Mission Control for clarification. The stress in his voice was audible, but only later would the two men learn how bad things really were. In that critical moment, hurtling like a lawn dart toward the surface of the moon, the Apollo guidance computer had crashed.
Several years earlier, Hal Laning, a computer scientist at MIT’s Instrumentation Laboratory in Cambridge, Massachusetts, had been asked to design the operating system that would fly men to the moon. He was bound by novel constraints: To save time, Apollo’s operating system would have to process inputs and deliver outputs without noticeable delay. And to stick the landing, it would have to be resilient enough to recover from almost every mode of error, human or otherwise.
Laning’s colleagues spoke of him with awe. His office was adjacent to an air-conditioned room that housed two giant mainframe computers, which took up much of the first floor of the building, and which he oversaw in the manner of a doting parent. The programmers interacted with the computer via a desk-sized control panel. When they got stuck, they went across the hall to interact with Laning. Computer code was not displayed on a monitor—there weren’t any—but instead printed onto reams of oversize paper called listings, which the programmers hand-edited with a marker. Laning’s office overflowed with these listings, making it difficult for his supplicants to find an open chair.
Laning had set the paradigm for computing once before. In the 1950s, he started programming MIT’s first digital computer, which had just been completed. Doing so required complicated mathematical notation, and, seeking to reduce his workload, Laning devised an assistant called “George,” which translated higher-order algebraic equations into language the computer could understand. This early compiler helped inspire Fortran, which in turn spawned most major computer programming languages used today.
Working on Apollo, Laning did it again. Drawing from intuition, with no historical examples as a guide, he determined that each program in the Apollo operating system would be assigned a priority number. Jobs like guidance and control would be given low numbers and run as constant background processes. These could be interrupted by higher-priority jobs, like data requests from the astronauts. The result was a virtual parallel processor that could run off a single central processing unit.
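Laning's priority scheme can be sketched as a toy scheduler (a model of the idea, not the actual AGC Executive; in this sketch a higher number means higher priority, so background jobs carry low numbers):

```python
import heapq

class Executive:
    """Toy priority scheduler: low-priority jobs run as background work
    until something more urgent arrives and preempts them."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities run in arrival order

    def schedule(self, priority, job):
        # heapq is a min-heap, so negate the priority to pop the highest first
        heapq.heappush(self._queue, (-priority, self._seq, job))
        self._seq += 1

    def next_job(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

ex = Executive()
ex.schedule(10, "guidance and control")    # constant background process
ex.schedule(10, "navigation")
ex.schedule(30, "astronaut data request")  # interrupts the background work
```

Pulling jobs off this queue runs the astronaut's request first, then returns to the background processes, which is the "virtual parallel processor" effect on a single CPU.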
Having drafted the prototype, the sensei retreated to his chambers; Laning’s protégé Charles Muntz took over much of the actual programming. One concern about Laning’s scheme was that a surplus of interruptions might clog the CPU, like a juggler thrown too many balls. Muntz devised a solution he called restart protection. If an unmanageable number of jobs was sent to the processor, certain protected programs would spit their data into a memory bank. The processor queue would then reset, and the computer would restart immediately, resuming the protected tasks and abandoning the rest.
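Restart protection can be sketched like this (a simplified model; the queue limit, names, and checkpoint structure are my own stand-ins, not the actual AGC code):

```python
MAX_QUEUE = 7  # stand-in limit for the computer's scarce job slots

class RestartExecutive:
    """Sketch of Muntz's restart protection: when too many jobs pile up,
    reset the queue and resume only the protected, checkpointed jobs."""
    def __init__(self):
        self.queue = []        # pending job names
        self.checkpoints = {}  # protected job -> data saved to the memory bank
        self.restarts = 0

    def submit(self, name, protected=False, data=None):
        if protected:
            self.checkpoints[name] = data  # spit the data into a memory bank
        if len(self.queue) >= MAX_QUEUE:
            self.restart()
        self.queue.append(name)

    def restart(self):
        # abandon everything in flight, then reschedule only the protected
        # jobs, which pick up where their checkpoints left off
        self.restarts += 1
        self.queue = list(self.checkpoints)
```

Flooding such an executive with unprotected requests forces a restart, but a protected job like navigation survives with its saved data intact.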
Once Muntz’s team was finished, the operating system was assembled on a mainframe, then printed out as a sheaf of instructions, which were brought to a nearby facility managed by the defense contractor Raytheon. Converting the code to machine-readable binary meant threading bits of copper wire through magnetic cores on a kind of loom. Most of the weavers were women, whose progress was measured bit by bit: A wire that threaded through a magnetic core was a 1; a wire threaded outside of it was a 0.
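The weaving rule amounts to a simple binary encoding, sketched below (the 15-bit default reflects the AGC's 15 data bits plus a parity bit, which this sketch ignores; the function names are illustrative):

```python
def weave(word, bits=15):
    """Return the weaving pattern for one word of rope memory:
    a wire 'through' a core encodes a 1, 'around' it encodes a 0."""
    return ["through" if (word >> i) & 1 else "around"
            for i in reversed(range(bits))]

def read_back(pattern):
    """Sense the rope: reconstruct the stored word from its pattern."""
    word = 0
    for step in pattern:
        word = (word << 1) | (1 if step == "through" else 0)
    return word
```

Every word of the operating system was fixed this way at weaving time, which is why the finished rope was effectively incorruptible.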
A completed bundle of wires was called a rope. Once all of the ropes containing the operating system were completed, they were plugged into the computer and run through a battery of tests. Error 1202 signified that the processor was overloaded and that Laning’s scheme had forced a restart. In the months preceding the Apollo 11 launch, computer scientists had deliberately triggered numerous restarts in simulation. The operating system had never failed to preserve the critical data.
Armstrong and Aldrin didn’t know that. On the lander’s control panel, above the computer’s console, was a circular button marked ABORT, which, when depressed, would cleave the spacecraft in two, blasting the ascent module back into orbit while sending the remainder hurtling into the moon. The two men had trained for a computer error scenario; they’d worked the console in their simulator at Cape Canaveral so hard they’d nearly wiped the labels off the keys. But there were dozens of possible error codes, and the astronauts hadn’t memorized them all. Some could be overridden with a “go” command; others called for an “abort.” It was up to Houston to make the call.
When Mission Control heard Armstrong’s tense request for information, a well-rehearsed sequence of events played out. Gene Kranz, the flight director, delegated the decision to Steve Bales, the guidance officer; Bales turned to mission specialists Jack Garman and Russell Larson, who consulted the handwritten table of error codes Garman had compiled. Together, Garman and Larson confirmed that error 1202 meant the computer had managed to save the lander’s navigation data before croaking. This scenario was a go.
But what if the computer continued to behave unpredictably? In addition to running the spacecraft’s guidance and navigation systems, the computer assisted Armstrong with steering and control. Below a certain altitude—100 feet or so—an abort was no longer possible, and Armstrong would be forced to attempt a landing even if his computer was malfunctioning. He had little margin for error. On a hard crash landing, the astronauts might be killed; on a not-so-hard crash landing, the astronauts might survive, only to be stranded on the moon. In this nightmare scenario, Mission Control would bid Armstrong and Aldrin farewell, then cut communication as the two prepared to asphyxiate. Michael Collins, in the command module, would make the long journey back to Earth alone.
Imagine pulling the plug on the moon landing. Imagine not pulling the plug, then explaining to a congressional committee why two astronauts had been killed. Jack Garman, 24 years old, gave the go-ahead. Larson, too scared to speak, gave a thumbs-up. Bales made the final call. “It was a debugging alarm,” Bales told me recently. “It was never supposed to occur in flight.” Bales had a monitor in front of him, with a digital readout of the computer’s vital signs. They appeared unaffected. He said, “Go.” By the time Houston relayed the message to Armstrong, almost 30 seconds had passed.
Armstrong resumed assessing the course. Apollo 10 had reconnoitered the landing area, and Armstrong had spent hours studying those photographs, committing landmarks to memory. He’d noticed earlier that his trajectory was a little long, but before he could fully react, Aldrin queried the computer for altitude data. As before, he was answered by an alarm. The computer had crashed again.
Back at MIT, dozens of people were crowded around a squawk box with an open line to Mission Control. Among them was Don Eyles, 26 years old, who, along with his colleague Allan Klumpp, had programmed the software for the lander’s final descent. The first restart had alarmed Eyles. The second terrified him. This was not just a glitch but a series of glitches, and he worried that Mission Control didn’t fully understand the consequences.
This phase of the guidance program consumed about 87 percent of the computer’s processing power. The request from Aldrin used an additional 3 percent or so. Somewhere in the middle, a mysterious program was stealing the remaining 10 percent, plus a little more, overloading the processing queue and forcing the restarts. The next phase of the landing was still more computationally demanding, and during that phase the computer would crash even without Aldrin’s input. “Some dreadful thing is active in our computer, and we do not know what it is, or what it will do next,” Eyles wrote of this moment in his memoir.
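The arithmetic of the overload is straightforward (figures rounded to the percentages given in the account above):

```python
guidance = 0.87  # descent guidance phase of the landing program
query    = 0.03  # Aldrin's data request from the console
mystery  = 0.10  # the unidentified program, "plus a little more"

load = guidance + query + mystery
# Demand meets or exceeds the processor's full capacity (1.0), so jobs
# back up in the queue and restart protection forces a reboot.
overloaded = load >= 0.999
```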
In Cambridge, Eyles stared at his colleagues in dismay as Mission Control authorized the second go command. Eyles was out of the command loop, but he knew how the computer worked better than anyone in Houston. It might keep restarting, and the closer Armstrong and Aldrin came to the surface, the worse the problem could get. What Eyles deduced in that terrifying moment he would not reveal publicly for years to come: To him, this scenario was not a go. It was an abort.
In the next three minutes, the lander dropped roughly 20,000 feet. Scanning the moon’s desolate surface, Armstrong began to make out features in the lunar plain. (Apollo planners had timed the landing so the sun would cast long shadows on the rocks.) The computer automatically entered the next phase of the descent, followed by another reboot and another go command from Mission Control until finally, at less than 2,000 feet above the lunar surface, the computer had its worst crash yet.
The alarm blared and the lander’s readout went dead. For 10 long seconds, the console displayed nothing—no altitude data, no error codes, just three blank fields. Armstrong’s heart began to race, rising to 150 beats per minute, the same as that of a man at the end of a sprint. With the moonscape zipping by outside his window, he was the closest any human had ever been to another world, but, like a distracted driver, his attention was focused on the computer. Finally the console came back on line. Mission Control confirmed: It was another 1202. “I never expected it to come back,” Armstrong later said.
The alarm subsided, but just seconds later came another reboot, another dropout of the display, this last one just 800 feet or so above the surface. That made five crashes in four minutes, but the go commands from Houston kept coming. The controllers had put their faith in the box on the wall. “An abort is not that safe either, and the lower you go, the less safe it becomes,” Bales told me. “There was an unspoken assumption, I think, that anywhere below 1,000 feet, Armstrong was gonna take a shot at it.”
Mission Control went quiet; there was nothing useful left for them to say. Armstrong, following protocol, assumed partial control via stick. This reduced the processing load, ending the errors, but the distractions had led Armstrong to overshoot the designated touchdown corridor by several miles. The long hours he’d spent memorizing the Apollo 10 photographs were wasted. Armstrong was going to have to eyeball it.
The Sea of Tranquility, he could see, was a misnomer; up close, the moon looked as if it had been used for target practice. Armstrong flew the lander almost parallel to the surface, passing over a large crater and an unsuitable field of rubble before spotting a flat expanse of powder. Aldrin consulted the computer for the data that would help them navigate the tricky final seconds of the landing. He had no way of knowing if it was going to go blank again.
Armstrong had had his wings clipped over Korea; he’d bounced an airplane off the upper atmosphere; he’d rescued Gemini 8 from a violent zero-gravity spin. Now he was piloting a malfunctioning spacecraft to touch down on an alien world.
Just 40 seconds after the computer’s final restart, he slowed the lander’s forward momentum, then rotated the legs toward the surface. As the engine kicked up a blinding cloud of dust, Aldrin read aloud a steady stream of figures from the console. With almost no fuel to spare, the lander dropped, in slow motion, to kiss the surface upright, and the particles of moondust hung suspended in the sunlight until the gentle lunar gravity pulled them back to rest.
Back on Earth, the computer scientists scrambled to figure out what had caused the processor overload. Aldrin and Armstrong were walking on the moon, but if their computer kept crashing they might have a hard time getting back. They had about 13 hours before the astronauts were to blast off in the ascent module.
The MIT team located the source of the error with only two or three hours to spare. In anticipation of a possible abort, Aldrin had insisted that the spacecraft’s rendezvous radar remain turned on. This system pointed upward, allowing it to track Collins in the command module. During the descent, the dial for the rendezvous radar had been turned to the wrong setting. Normally, this shouldn’t have caused a problem. But because of a design defect, every once in a while the system would bombard the computer with unnecessary requests. It was the worst kind of error: erratic, subtly dangerous, and difficult to reproduce.
Apollo 11’s rendezvous radar system had triggered this rare error, and during the most difficult portion of the landing, 13 percent of the computer’s resources were being stolen by an antenna pointing at the sky. Fortunately, the programmers had considered such stray requests expendable, and with each restart they were temporarily dismissed. Instead, the computer focused on the critical tasks of navigation, guidance, and control. These, Apollo programmers had determined, were the most important of all programs, trumping even the software that ran the display. When the computer blanked out the registers, it was trying to preserve the precious navigation data that told the spacecraft where to go. Laning and Muntz’s scheme, woven into incorruptible rope, had saved the touchdown.
Before leaving the moon, on orders from Mission Control, Armstrong and Aldrin turned the rendezvous radar’s knob to the correct position and, for good measure, cut its power supply. Having implemented this crude fix, they blasted off to lunar orbit, leaving behind the empty lower half of the lander and knocking over the American flag they had planted in the lunar surface. They reunited with Collins, then, three days later, splashed down in the Pacific. Upon their return, the Apollo program was showered in glory. Aldrin became an advocate for the exploration of Mars; Armstrong moved to Cincinnati. Collins wrote a memoir, in which he acknowledged how dangerous the mission had been. “If they fail to rise from the surface, or crash back into it, I am not going to commit suicide,” he wrote of watching Armstrong and Aldrin prepare to ascend. “I am coming home, forthwith, but I will be a marked man for life, and I know it.”
The reclusive Hal Laning, having conquered spaceflight, moved into 3D modeling. The operating system he devised was ported from Apollo to the Navy’s F-8 fighter jet, proving the feasibility of computer-guided flight control. Gordon Moore, who had observed Apollo’s insatiable demand for miniaturized silicon chips, left Fairchild to cofound Intel. In 1971, Don Hoefler, a correspondent for Electronic News, wrote a series of articles surveying the dozens of Bay Area companies that had sprung up in Fairchild’s wake. It was titled “Silicon Valley, USA.”
Finally, there was Don Eyles—the man who would have scrapped the mission if only he’d had the authority. I caught up with him in April, after he’d had 50 years to reflect. Had Mission Control made the right call? “I think from our point of view, at MIT, something was missing inside the computer, something unknown was seriously affecting our software,” he said. “But maybe we knew too much! Those guys could only see it from the outside. In a way, it was easier for them, and I think they got it right.” He paused for a moment. “Anyhow, the mission landed, so they must have got it right,” he said.
Eyles then made another point: “This was the first time men submitted to ride in a vehicle controlled by a computer.” In the most critical phase of the descent, that computer had suffered five unplanned restarts in four minutes, but from the perspective of operating stability it had performed better than its programmers thought possible. Apollo launched six more missions, but public interest waned. Perhaps the program’s true legacy is etched not in moondust but in silicon. Aldrin and Armstrong got the glory, but housed in a metal box on the back wall of the lander was the blueprint for the modern world.
In his new book, God, Greed, and The (Prosperity) Gospel: How Truth Overcomes A Life Built on Lies, Costi Hinn explores, explains, and exposes his journey from anointed heir-apparent in the Hinn family ministry to conversion to true faith in Jesus Christ. Not only does Hinn relate his rescue by the power of God from a false gospel, he examines the Prosperity Movement in the light of Scripture while equipping believers with ways to reach their loved ones held captive in this false theology.
God, Greed, and The (Prosperity) Gospel begins with Costi’s own testimony. As the nephew of Benny Hinn, he is able to provide an inside look into the Prosperity Movement and the personal perks that come to those who propagate it. Its promises of health and wealth are reserved for those who “have enough faith” or who “sow a seed” into the movement itself. But Hinn came to see the distortion and destruction of such theology, and he explains how the gospel of prosperity promises to meet physical needs while neglecting the true spiritual condition of those who desperately cling to it. Hinn gives a solid biblical perspective on health and wealth and concludes with practical help, sometimes drawn from his own experience, for those who are coming out of, or have loved ones trapped in, the Prosperity Movement.
God, Greed, and The (Prosperity) Gospel is poignant and powerful with personal testimony and solid scriptural analysis, yet it is written simply and succinctly. It is biblically sound, brutally honest, theologically accessible and thought-provoking, but without bitterness, malice, or self-promotion. Hinn’s passion for the truth of the Gospel is matched by a fierce love for his family. Every word of this book is seasoned with transparency, humility, grace and compassion. God, Greed, and The (Prosperity) Gospel is a helpful, essential and timely addition in view of today’s Christian landscape, and an incredible reminder of God’s saving power. I highly recommend it.
I was given a free copy from the publisher in exchange for my honest review.
In Jocelyn Green’s Between Two Shores, Catherine Duval is determined to remain neutral, despite her French heritage, in the Seven Years’ War, a conflict that seems interminable. She must trade with the British if her family is to survive. But when her long-lost fiancé is taken prisoner, Catherine is forced to make a choice that threatens all she holds dear. Will this decision bring the peace she so desires? Or will it cost her everything, including her life?
Between Two Shores is captivating historical fiction. The narrative is rich with time-period detail, vivid description, and compelling characters who display complexity, authenticity and depth. The plot has incredible twists and turns and leads to an unexpected climax. Green’s writing is exceptional, eloquent, and exquisite, and the reader will remain riveted until the last page.
Between Two Shores is brilliant, beautiful, and one of my favorite fiction reads of the year. I highly recommend it.
I was given a free copy from the publisher in exchange for my honest review.
In his book, Gray Day: My Undercover Mission to Expose America’s First Cyber Spy, Eric O’Neill chronicles his part in the quest to take down the worst Russian mole in U.S. history, Robert Philip Hanssen. Hanssen, an FBI agent suspected of spying for the Kremlin for nearly two decades, is set to retire in three months. The Agency has tasked O’Neill with shadowing Hanssen and gathering the evidence needed to convict him of espionage. O’Neill tells of the cat-and-mouse game employed to corner and apprehend the canny and diabolical spy whose betrayal cost the U.S. dearly in intelligence and lives.
Gray Day is a well-written account of secrecy, subterfuge, and suspicion, and O’Neill gives an inside look at the operational detail of one of the most daring and intriguing cases in FBI history. It is an enlightening and informative look at the depravity and hubris of human nature as law enforcement endeavors to catch one of its own. O’Neill opens up about the toll that the case took on his personal life as well as his professional life. The narrative is both exciting and frustrating, as one of the worst traitors in history eludes capture for so long before finally being brought to justice.
Aside from a dense last chapter about cyber-security and a few expletives, Gray Day is a page-turning, tension-building journey into the world of covert espionage. I highly recommend it.
Trend-lists in publishing always include historical fiction. It’s hot and has been for quite some time. There’s something about reliving the horrors or glory of the past, all the while knowing the outcome, that’s satisfying. Experts call it post-traumatic exposure therapy. Yep. Reading historical fiction is good for you. It’s therapy. And you can include historical fiction in your contemporary novel.
What is the Recent Historical Novel?
Historical fiction is usually set in a time period outside the oldest working generation’s memory. World War II is a good example, and Vietnam is rising as that generation reaches retirement age. Both wars are pillars in time that hold up today’s society. Those time periods, and earlier ones, define historical fiction.
Recent Historical Novels center on newer events. They feature characters living through, flashing back to, or simply reflecting on a catastrophic event embedded in the collective memory of a nation or the globe. A few novels I read last semester incorporate events such as Kennedy’s assassination, the Challenger explosion, 9/11, the O.J. Simpson trial, the Fukushima Daiichi nuclear disaster, Hurricane Katrina, Mount St. Helens, and the death of Osama bin Laden.
These events aren’t used for period pieces. So, how do recent historical novels work?
The Human Element
We need news. And we get news. As a society, we watch events unfold 24/7. We discuss Kennedy with our friends. We remember where we were when the Challenger burst into flames. Thanks to the globalization of news, we experience hurricanes, tornadoes, and tsunamis as they roar. Once an event is over, we need to relive it, remember the horror, make sense of the pain and destruction. Through this social connection, we learn how to respond when the next event crashes upon us. Passing information—debriefing—is a powerful tool to strengthen society.
Contemporary Novel and the Past: How the Recent Historical Novel Works
Alexander Manshel calls the Recent Historical Novel “past perfect.” Readers aren’t looking to recall a simpler time, because survival in the past wasn’t simple. And there is little comfort in the last few decades, either. Just new problems. Instead of recreating the horrors, however, we’re linking, just as the past perfect tense does in our writing, connecting the past to our lives today.
When a contemporary character thinks about Mount St. Helens, what does the massive explosion change in her life? As she comprehends the collective memory over time, is she more conscious of environmental issues? Especially when coupled with Fukushima? The character is trying to make sense of something that happened in the past, trying to see if society really changed. Did it? Is the world a better place after the recent events? If not, how does that change the character?
By using a recent event, you’re helping periodize the past, presenting events as your characters saw them, much as Jane Austen recorded her view of her own world.
Add the Recent Historical Novel to your work and see how deep the roots of your characters can grow!
Redefinition is definitely on-trend in our culture. Words, phrases, and concepts are generally fluid, and are often the tools of a non-democratic process which draws new lines between ‘good’ and ‘evil’ at will. Other linguistic changes seem to simply happen, usage and over-usage leading to their decommissioning and devaluation. One of those words is ‘kindness’, and the shift which has taken place here is symptomatic of wider changes in society and community. In this post I want to probe kindness a little bit, suggesting some subtle ways in which our expectations around this word have altered.
Random acts and covenant kindness
Over the past decade or so our culture and, to a certain extent, the church have bought in to the idea of kindness as an isolated act, as a randomised impulse that ought to be acted on. Paying forward a coffee, passing someone a handwritten note of appreciation, calling someone out of the blue and asking how they are, have become popular ways of showing affection and concern. For many people this is how kindness is now universally understood – it is a brief burst of benevolence, a momentary debit on our energies, resources, or social awkwardness, a credit to our sense of altruism and engagement.
Whatever benefit such acts accrue, they fall far short of how the Bible frames kindness, with its family likeness to ‘grace’. In Scriptural terms, kindness is inherently relational and essentially covenantal, it bespeaks commitment and a long-haul mindset. God’s kindness to Israel is not impetuous or capricious, his mercy is not mercurial or fitful – instead it is a grace grounded in his character, woven through history, and written in the blood of his Son. This kindness is so consistent that it is one of the grounds on which God will indict the unfaithfulness of his people – he has not conducted himself towards them in the way that a fertility god is expected to act. God has been fatherly, faithful, forgiving in the kindness of his heart towards those whom he loves. God’s kindness, then, is far from random, it is as regular as his character, and as unflinching as his purpose.
Good kindness is like God’s kindness

All of this means that our kindness as Christians must be of a different stripe than how the world sees and spells compassion. When Jesus was challenged about neighbourliness he not only disarmed the malicious intent of his questioner by framing his story with a Samaritan as its hero, but also demonstrated the long-term nature of helping a neighbour: provision was made beyond the moment, follow-up was guaranteed (Luke 10:25-37).
It is often said that in the online world non-Christians are more kind than Christians. Sadly this is often borne out by the merest sampling of what is being said and how it is being said. Christians are prone to vitriolic dispute, to crass caricatures, ad hominem attacks, and misrepresentation of the facts. Non-Christian Twitter, especially around shared interests, can seem like a kinder place by comparison, a place of affirmation and acceptance. The problem, however, is that this kindness is no more authentic than the anger and angst that social media elicits. It is easy to type nice things, it is easy to warmly congratulate, it costs us nothing to drop a compliment, and it may in fact gain us something in return. This kindness can bear more resemblance to Middle Eastern hospitality culture, which looks warm and inviting to individualised Europeans when they encounter it, but which carries powerful expectations of entailment and reciprocation. A good test for this form of kindness is to ask if it represents a vested interest, and if it is extended to those who differ from the group consensus. The answer to the first question is normally affirmative; the answer to the latter is normally negative.
Christians are called to show good kindness, modelled on God’s kindness. It is a compassion which doesn’t pay things forward, which isn’t interested in personal outcome, which isn’t predicated on the worthiness of the recipient, or their intellectual affinity with us. It is a love which tramples boundaries, which upends expectations, which hands a tunic to a coat-demanding-enemy, which turns the cheek, which shows favour to the evil and the good, just like the sunshine of God’s love, and the rainfall of his care.
Chronic conditions demand chronic kindness

One of the great strengths of this biblical kindness is its stamina in the face of hard things. Random acts of kindness might warm someone’s heart for a moment, but without follow-up and consistency they amount to little more than a demand that an individual ‘go and be filled’ (James 2:16). Many of those most in need of our care are facing chronic issues and demands, troubles that won’t go away, or wounds which take a long time to heal. Insufficient kindness in those circumstances is mere salt to the wound, and ultimately brings grief to the recipient.
Costly, covenantal kindness makes the return visit, it engages in follow-up – it withstands rebuffs, it can listen to frustration, it knocks the door when the blinds are drawn, it carries the broken until they are whole, it nurses the dying until they are gone, it feeds the hungry until they can provide for themselves, it bears all things, believes all things, hopes all things, it never fails.
If our kindness is nothing other than an exaggeration of the culture’s empty sentiments, if it is a smile and a ‘Jesus loves you’ sticker, if it never costs, burns, or burdens then we are falling short of the grace we have been given, the grace we ought to show to a world awash with impotent compassion.