30 December 2006

Guided imagery

“The term ‘guided imagery’ refers to a number of different techniques, including visualization; direct suggestion using imagery, metaphor and storytelling; fantasy and game playing; dream interpretation; drawing; and active imagination.

Therapeutic guided imagery is believed to allow patients to enter a relaxed state and focus attention on images associated with issues they are confronting… Guided imagery is a meditative relaxation technique sometimes used with biofeedback.”

[from 'Natural Standard', an organization that produces scientifically based reviews of complementary and alternative medicine (CAM) topics.]

Imagery is the most fundamental language we have. Everything we do is processed in the mind through images.

For the past hundred years, many renowned Western psychologists have worked with imagery (dreams, daydreams and fantasies), some of whom have even postulated their own psychoanalytic techniques. Besides Wolfgang Kretschmer (whom I mentioned in my previous post), Robert Desoille’s ‘guided daydreams’, Jacob Moreno’s ‘psychodrama’, and Hans Carl Leuner’s ‘experimentally induced catathymic imagery’ have all contributed to the use of imagery in therapy. Leuner later developed his technique further, calling it symboldrama psychotherapy or guided affective imagery.

However, according to Joe Utay (Assistant Professor of Counselor Education at Indiana University of Pennsylvania, and Director of Counseling and Evaluation Services, Total Learning Centers), in a March 2006 article in Journal of Instructional Psychology, it is David Bresler’s and Martin Rossman’s work with guided imagery that is better known today. Bresler and Rossman co-founded the Academy for Guided Imagery and defined guided imagery as a “range of techniques from simple visualization and direct imagery-based suggestion through metaphor and storytelling.”

There are many others, of course, who have worked with guided imagery. In fact, there is no end to the amount of research that’s going on today in this field. In the same informative March 2006 article from Journal of Instructional Psychology, Professor Utay explains, “Guided imagery can be used to learn and rehearse skills, more effectively problem solve through visualizing possible outcomes of different alternatives, and increase creativity and imagination. It has also been shown to affect physiological processes… in addition to its use in counseling, guided imagery has also been used with very positive results in sports training, rehabilitative medicine, and healthcare.”

Although its applications are manifold, guided imagery is considered a part of alternative therapy/medicine and has yet to be embraced by the mainstream medical fraternity. As the Natural Standard website says, “…research is early and is not definitive.”

To read the article on Guided Imagery by Professor Joe Utay and Megan Miller from the March 2006 issue of Journal of Instructional Psychology, click here.

To visit the Natural Standard website, click here.

To visit the Academy for Guided Imagery website, click here.

29 December 2006

Thinking in the form of a movie

Dreams, daydreams and fantasies are overlooked for all the good they do – or can do – for us. Instead, we scold people for frittering away time daydreaming or fantasising about things which are unreal and may not happen. In everyday life, we tell ourselves, dreams and daydreams have no substance; it’s foolish to spend time on such activities.

Yet, in older traditions and cultures, such as those of the ancient Hindus, Chinese and Tibetans, or of Native Americans, dreams, daydreams and fantasies played an important part in forming imagery and imagination. They were essential in the formation of the concept of ‘I am’ and, therefore, essential to life.

Moreover, people believed that dreams, daydreams and fantasies could be used effectively for relaxation and healing, for solving problems, and for guiding the progress of their tribes. These people went to the extent of using hypnosis or hallucinogens (mind-altering drugs), such as opium, datura or peyote, to induce daydreams and fantasies… using symbols for meaning.

In the West, perhaps thanks to Descartes, imagination didn’t fit into the nature of rational thinking and was considered ‘unreal’. It was with the advent of psychology in Europe and, particularly, with Sigmund Freud’s declaration that dreams, daydreams and fantasies can help unravel a great deal about a person’s mind, that their use became popular as a psychoanalytical tool.

Besides Freud, many psychologists began working with dreams, daydreams and fantasies. Among them were Carl G Jung, Roberto Assagioli, Carl Happich, Hans Carl Leuner, Wolfgang Kretschmer and Robert Desoille… who started using daydreams and meditation in therapy, introducing the Eastern concept of ‘I am’ into their psychoanalysis.

It was Kretschmer who referred to daydreams as inner visions, or ‘thinking in the form of a movie’.

28 December 2006

Dream world

Is our dream world a new world we have created, as a writer or a filmmaker does with his/her narrative? Or, is it a pre-existing world which we have only just discovered? Is our dream world in the present, or in the future/past?

When we dream, our dream world seems real. While we dream, we have no cognition of our waking world – the actual reality, the phenomenal world where we physically exist – until we attain, or return to, the waking state and become conscious of our surroundings in the real world. Then, we become aware that we had been dreaming.

Yet, in the dream world, we are aware of ourselves and our surroundings. We are able to see, speak, smell, hear, move about, and do things as if everything was real. It’s as if we existed in a parallel world of our own. And, when we wake up, we are unable to enter this same dream world again. At least, not at will.

Only the experience remains.

26 December 2006

Doubles

There’s no doubt that we find it difficult to identify ourselves with the reality-and-fantasy sequences that many authors and filmmakers present to us in their narratives. Yet, psychologists say, we all live our lives in similar fashion.

Whenever we are unable to cope with reality, when the events in our lives are too tough to handle, we escape into fantasies, daydreams and wishful thinking. We continuously engage in a process of self-creation and self-discovery, constructing autobiographical narratives. We create fictional worlds in our minds where our problems are sorted out and resolved, providing a much-needed relief.

In these constructed narratives, we transform ourselves into fiction and become extensions of our individual selves, in the same way many authors and filmmakers present their characters in their narratives. We create alternate versions of ourselves, ‘doubles’ you might say, who play different roles in different events… although, in reality, our lives may never change. These ‘doubles’ have the ability to escape from reality and the freedom to change anything, living a life of unlimited possibilities.

We stretch the time at our disposal, living out not only one or two events from our lives, but, sometimes, our entire lives. We see our lives being lived differently… fulfilling our desires, resolving issues which are too horrible for us in our real everyday lives. And, should the reality we face everyday not change for us in accordance with our desires, we live and re-live these fantasies for years together.

Our real lives and our fantasy lives are interwoven, though we may never reveal this to the outside world. For, expressing such fictional worlds in print or on celluloid is unthinkable! No wonder, whenever we read about or see a display of such narrative, we feel a discomfort which we are unable to explain.

24 December 2006

Fragmented narrative, factual resemblance

The trouble with films like David Lynch’s ‘Mulholland Drive’ or Aparna Sen’s ‘15 Park Avenue’ is that, while the director is having so much fun with his/her film, the viewer feels left out… isolated. The continuous interplay of fantasy and reality is just too much for the viewer to keep track of… and results in the confusion I experienced and mentioned in my previous posts.

The viewer is unable to identify with what’s going on around or in front of him/her, as there’s a shift in the reality created by the director, as well as the characters, in the film. This confusion in the viewer’s mind turns into distaste and, later, into revulsion. Perhaps, that’s what director Aparna Sen tried to convey through the peripheral characters – the schizophrenic protagonist’s family and friends – in her film ‘15 Park Avenue’.

Maybe the message in a fragmented narrative is stronger than what the director of a film, or the author of a book, using this technique intends: that the fragmented narrative bears a much stronger factual resemblance to our lives than we may be able, or willing, to accept.

23 December 2006

15 Park Avenue: fragmented narrative

The fictional landscape of the fragmented narrative has been an experiment of post-modern writers and filmmakers for many years. European writers like Umberto Eco and Italo Calvino, as well as filmmakers like Jacques Rivette of the French New Wave and, later, David Lynch from the US, are all masters of this narrative technique.

In the fragmented narrative, mastering the interplay between reality and dream, weaving the plot around the characters, is not an easy task and requires a special virtuosity. Since the presentation of the fictional landscape is a subjective one, often directly connected to the mind of the characters in the story/film, many of the stories/scripts deal with situations where the characters themselves create the fictional landscape. David Lynch’s ‘Mulholland Drive’ (see my previous post) is a perfect example of this where the plot shifts seamlessly between dream and reality.

In India, earlier this year, filmmaker Aparna Sen used this technique in her film ‘15 Park Avenue’ – an exploration of schizophrenia in a young woman. I found the film quite insightful (and informative) as it dealt not only with a schizophrenic person, with the usual interplay of dream and reality happening all around her, but also with the emotions and reactions of the onlookers – the schizophrenic person’s family and friends – which offered another version of the reality. The question of which version is the reality and which is the dream was masterfully handled by Aparna Sen… right till the end.

Of course, some confusion remains.

22 December 2006

Mulholland Drive: narrative in a subjective landscape

There have been moments in life when I’ve been confused, but very few would come close to my confusion during, and after, watching David Lynch’s film ‘Mulholland Drive’. That must have been sometime in 2002 and, honestly, I’m still trying to piece the film/story together… trying to make sense out of my confusion.

‘Mulholland Drive’ is about two beautiful women – Diane and Camilla – both actresses in Hollywood. Or, correctly speaking, the film is about Diane (played by Naomi Watts), trying to make it as an actress, but failing; and about Camilla (played by Laura Elena Harring), succeeding. The entire film seems to be a fantasy, a dream, played in Diane’s mind… which is where the confusion thrives.

Diane (as small-town girl Betty) comes to Hollywood with hopes of becoming an actress. While staying alone at her aunt’s apartment, Diane finds a glamorous but traumatised amnesiac woman, Rita (who is actually Camilla, but since Camilla can’t remember anything, she adopts the name Rita from a Rita Hayworth poster), hiding in the apartment. Diane (i.e. Betty) helps amnesiac Camilla (i.e. Rita) slowly discover her (i.e. Rita’s) true identity and, in the process, the two women become lovers.

Diane’s acting career fails miserably, while Camilla’s succeeds superlatively. Camilla becomes a glamorous celebrity, leaving Diane for another lesbian lover. Unable to take the pain of her failures (in career and in love) and overwhelmed by jealousy, Diane hires a hitman to kill Camilla. Then Diane commits suicide. Camilla escapes the attempt on her life, but the incident turns her into a traumatised amnesiac. Camilla wanders aimlessly for a while before taking refuge in a Hollywood apartment, where she is found by Diane (i.e. Betty).

Got all that? There’s more, of course, but for simplicity, I won’t go into it.

What’s fascinating about ‘Mulholland Drive’ is that director David Lynch has been able to take the normal path of narrative, with its objective reality, and turn it upside down into a subjective landscape. The landscape in the film is a fantasy in Diane’s mind. That is, it’s a view of one of the characters in the film – and that too, it’s a fantasy in a troubled mind. A mind which itself is trying to escape from the reality it cannot cope with. It’s fiction (e.g. Betty being dreamed up by Diane) within fiction (e.g. Diane struggling with her failed acting career and resorting to fantasy), creating a totally subjective landscape.

In ‘Mulholland Drive’, David Lynch presents reality in fragments of fantasies that his characters dream up while trying to cope with the reality of their lives. The film viewer has trouble identifying with this. If the characters in the film are unable to differentiate between fantasy and reality, how can the film viewer?

21 December 2006

Hybrids of truth and fiction

Telling the truth about the self, constituting the self as complete subject… it is a fantasy. In spite of the fact that autobiography is impossible, this in no way prevents it from existing.
– Philippe Lejeune, ‘On Autobiography’

We look for meaning in life – to explain life to ourselves. For some of us, that’s not enough. We need to share our experiences, our interpretations of life, with others. We need to speak of our joys, our sorrows and the lessons we’ve learnt, warning others of the pitfalls, passing on personal philosophies as wisdom.

But, are these accurate reflections of our lives? Or, are they pseudo-realities constructed by us to suit our notions of our identities and personalities – i.e. the identities and personalities we wish to present/project to others? And in doing so, do we not create fictional landscapes within which we exist… and, perhaps, even seduce others to join us there?

Autobiographies, memoirs, personal stories and even interviews are such fictional landscapes, simulations of our lives, where truth and fiction co-exist naturally… happily. For, when these landscapes are crafted well, it is impossible to distinguish truth from fiction, fiction from truth.

In such situations, autobiographies, memoirs, personal stories and interviews, all become hybrids of truth and fiction. In them, we are no longer ourselves, but appear as characters like any other.

Of course, the authorship of the narrative still remains with us.

19 December 2006

What are we without our stories?

“It might be said that each of us constructs and lives a ‘narrative’, and that this narrative is us, our identities.”
– Oliver Sacks, neurologist, author

Alfred Hitchcock is famous not only for his thrillers, the suspense and murder-mystery tales which are almost a film genre by themselves, but also for making mysterious appearances in his films. Clive Cussler, one of my favourite authors of adventure novels and creator of the hero Dirk Pitt, has an old-man character called ‘Clive Cussler’ who makes unexpected appearances in the author’s own novels.

Apart from the comic relief this brings in, I find this act of being a part of one’s own creation a fascinating subject. I mean, this is no autobiography, nor a self-portrait by a painter. A filmmaker and a writer are surreptitiously including themselves as characters in their fictional constructions. Are they doing this for fun? Are they trying to construct new identities for themselves? Are they trying to tell us that their narratives are not all fiction?

This got me thinking in a bit of a (self-)inventive mood. Are we not all stories ourselves? I mean, when we write our resumés for a job application or introduce ourselves at a dinner party or write a short profile on Blogger, are we not constructing stories of ourselves to create an impact or produce a desired result… much the same way a filmmaker or a writer would do during their creative process? And if this be true, are we not all individual stories of some kind?

13 December 2006

On ‘A Writer’s Diary’

“At the best and even unexpurgated, diaries give a distorted or one-sided portrait of the writer, because, as Virginia Woolf herself remarks somewhere in these diaries, one gets into the habit of recording one particular kind of mood – irritation or misery, say – and of not writing one’s diary when one is feeling the opposite. The portrait is therefore from the start unbalanced, and, if someone then deliberately removes another characteristic, it may well become a mere caricature.”

(Leonard Woolf, in the preface of his wife Virginia Woolf’s ‘A Writer’s Diary’ which was published 12 years after her death. ‘A Writer’s Diary’ contains edited excerpts from Virginia Woolf’s diary manuscripts.)

12 December 2006

Double roles

Why would we want to write our autobiographies? Isn’t there enough reading material in this world, both fiction and non-fiction, to entertain readers? And, how do we know that our autobiographies will entertain others? What do we really know about the reading habits of the billions of readers who inhabit this Earth? Why would any of them ever be interested in our life stories?

If it isn’t to entertain others, then would it be simply to tell our tales? Would it be to chronicle our lives, our experiences, and our learning in order to assert ourselves on this planet as individuals with distinct egos which need to be fuelled? If this be true, then isn’t writing an autobiography just a matter of conceit?

Or, is it to discover ourselves, to give social context to our individual experiences, to understand our relationship with the world around us in a self-investigative mode? In which case, isn’t writing an autobiography just a way of empowering our pens – or our keyboards – to ask questions about ourselves, about our lives, which we are too afraid to ask aloud in the real world we inhabit?

If this be true, then are we not re-making ourselves as bolder, more assertive characters in our autobiographies than we really are? And, in the process, are we not creating fictional characters and telling their tales which are truthfully not ours?

As we write our autobiographies, are we not playing double roles?

11 December 2006

How much of an autobiography should we believe?

Told in the first person, an autobiography is supposed to be a testimonial of the person writing it. The first-person voice makes the storytelling more compelling, more believable, more real. The autobiographer, as the storyteller, sucks us, the readers, into the story, making us believe that the reality of the autobiographer is the real world. Almost as if forcing us to recognise and accept the autobiographer’s view of the world as our own reality.

This technique is used by fiction writers too. As readers of fiction, we often escape into the fiction-writer’s world, seamlessly, believing the storytelling, the setting, the characters, etc. to be true… at least, for that moment. It’s like experiencing a reverie. However, we come out of this reverie, if not immediately upon closing the book, at least sometime soon afterward. We realise this is not the real world, but a fictional tale told by a person providing us a few hours’, or a few days’, entertainment.

Of course, we go through our emotions, feeling happy or sad or angry or despairing, in agreement with, or in response to, the writer’s treatment of the story and the experiences of the characters in the story. Later, we applaud or challenge or criticise the fiction-writer’s work, reviewing his/her skills as a storyteller, either in absolute terms or in comparison with other works of fiction. But, all through the experience of reading and discussing the work, never do we forget that it is a work of fiction. Anything can happen here. Reality can take its own shape.

But, an autobiography must speak the truth. People, places, dates, events, sequence of events, conversations cannot change to make the storytelling more entertaining, more compelling. The autobiographer is bound by these elements. However, the autobiographer may play with the style of presenting these elements and these facts, and add to them his own inner experiences and emotions as flavours. This basically means, the autobiographer cannot make up or fictionalise the narrative according to his/her whims.

Autobiographers rarely ever write their narratives on the spot. They write later, remembering, introspecting, relying on their memories. This is a tricky affair as memories are known to fail. Of course, autobiographers consult various notes, journals and documents before actually constructing their stories, but can these documents be 100% reliable? The emotions experienced instantly, the nuances of the moment are likely to be missing.

Moreover, autobiographers may be, like all human beings are, prone to talking too much about themselves, exaggerating their life stories, self-justifying their actions, presenting their opinions as facts… turning their autobiographies into works of fiction. If the only reliable source of facts in an autobiography is the autobiographer himself/herself, how can the reader, not having first-hand knowledge, verify all the facts of an autobiography? If all this be true, how much of an autobiography should we really believe?

08 December 2006

A grief observed

In ‘A Grief Observed’, C S Lewis wrote about his bereavement when his wife, Joy, died from bone cancer. Yet, he wrote the book, an autobiography, under a pseudonym, N W Clerk, hiding his authorship from the public and referring to his wife as ‘H’ (her first name was Helen). Perhaps, he wanted to keep his grief private and, yet, use his writing as a therapeutic tool to come to terms with it.

We may never know the truth behind his wish to remain anonymous when ‘A Grief Observed’ was first published. However, what we do know is that Lewis later decided to make his authorship public, apparently, upon receiving advice from friends. And, hence, we now have with us a wonderful autobiography… a look inside a man and his loss.

This thought, of this act of writing a personal narrative of one’s grief, while toying with the idea of remaining anonymous, made me wonder if Lewis’ ‘A Grief Observed’ is indeed an autobiography. I mean, if it is a 100% autobiography in first person… and not a semi-fictionalised account of C S Lewis’ life and his bereavement.

Mind you, I’m not accusing C S Lewis of deceit; nor am I suggesting anything derogatory. I’m merely wondering if Lewis wrote down exactly what he felt about his grief… as it ought to be in an autobiography. Or, did he come out of himself and, as if he were an observer observing someone else’s grief, write down what he thought C S Lewis would have felt at that moment?

Would C S Lewis have sacrificed some of his real feelings in order to write a book? And, if that were so, wouldn’t ‘A Grief Observed’ contain some fiction?

05 December 2006

A late love affair for Lewis

There is, hidden or flaunted, a sword between the sexes till an entire marriage reconciles them. It is arrogance in us to call frankness, fairness, and chivalry ‘masculine’ when we see them in a woman; it is arrogance in them, to describe a man’s sensitiveness or tact or tenderness as ‘feminine’. But also what poor, warped fragments of humanity most mere men and mere women must be to make the implications of that arrogance plausible. Marriage heals this. Jointly the two become fully human. “In the image of God created He them.” Thus, by a paradox, this carnival of sexuality leads us out beyond our sexes.
– C S Lewis, ‘A Grief Observed’

C S Lewis married Joy Davidman: first a civil marriage in a registry office (1956), and a year later, formally, by a clergyman at her bedside. Joy was suffering from bone cancer and was bedridden. Lewis was 57 at the time (a confirmed bachelor until then) and Joy 40. Apparently, C S Lewis and Joy Davidman Gresham had known each other since 1952, when she had visited him in Oxford. Joy, an American writer, separated from her husband (William Gresham) and with two sons, moved to England and later divorced her husband to marry Lewis.

Joy died in 1960. Lewis died in 1963. In ‘A Grief Observed’, C S Lewis records his experience of bereavement at his wife’s death. The book was published in 1961.

04 December 2006

C S Lewis and materialism

In a materialistic world, does God exist?

“Modern society continues to operate largely on the materialistic premises of such thinkers as Charles Darwin, Karl Marx, and Sigmund Freud,” wrote John G Guest Jr in an article titled ‘C S Lewis and Materialism’, which appeared in the Nov-Dec 1996 issue of Religion and Liberty from the Acton Institute. Prof Guest went on to say, “Yet few today feel at home in the materialist universe where God does not exist, where ideas do not matter, and where every human behavior is reduced to non-rational causes.”

I cannot agree with Prof Guest’s statements simply because I do not believe that a materialistic world and God are mutually exclusive concepts. If materialism is a way of understanding reality, then God certainly features there somewhere… along with the thoughts and emotions of people. For, if our world is viewed as a process – where change and transformation are critical to its being – God, religion and spirituality have as much a place in it as nature, science, history and politics.

What’s good about Prof Guest’s article is his presentation of British author C S Lewis’ perspective on materialism. C S Lewis, popularly known for his children’s tales ‘Chronicles of Narnia’ (recently made more popular by Hollywood as a film grossing close to US$1 billion), was an atheist from his adolescence until he turned to religion (theism) at age 31, and to Christianity a couple of years later. Upon his conversion, Lewis wrote avidly, even becoming a popular broadcaster on BBC, submitting his views in favour of Christianity, and refuting many of the assumptions and views against Christianity prevalent at that time (i.e. mid-19th century to mid-20th century).

According to Prof Guest, Lewis debunked materialism on issues such as reason and truth, morality, personal responsibility, and utopianism. He mentions that C S Lewis tried to put together a new natural philosophy that understood human beings as they were… with thoughts and emotions… and did not explain them away as tiny parts that add up to make a whole human being. C S Lewis’ God is central to this new natural philosophy, explaining away rationality and materialism as things of our past. Only God exists. Don’t take my word for it; read Prof John G Guest Jr’s article here.

02 December 2006

Worldview: Sigmund Freud & C S Lewis

All of us, whether we realize it or not, have a worldview; we have a philosophy of life – our attempt to make sense out of our existence. It contains our answers to the fundamental questions concerning the meaning of our lives, questions that we struggle with at some level all of our lives, and that we often think about only when we wake up at three o'clock in the morning. The rest of the time when we are alone we have the radio or the television on – anything to avoid being alone with ourselves.

Pascal maintained the sole reason for our unhappiness is that we are unable to sit alone in our room. He claimed we do not like to confront the reality of our lives; the human condition is so basically unhappy that we do everything to keep distracted from thinking about it.

The broad interest and enduring influence of the works of Freud and Lewis result less from their unique literary style than from the universal appeal of the questions they addressed; questions that remain extraordinarily relevant to our personal lives and to our contemporary social and moral crises.

From diametrically opposed views, they talked about issues such as, “Is there meaning and purpose to existence?” Freud would say, “Absolutely not! We cannot even, from our scientific point of view, address the question of whether or not there is meaning to life.” But he would declare that if you observe human behavior, you would notice the main purpose of life seems to be to find happiness – to find pleasure. Thus Freud devised the ‘pleasure principle’ as one of the main features of our existence.

Lewis, on the other hand, said meaning and purpose are found in understanding why we are here in terms of the Creator who made us. Our primary purpose is to establish a relationship with that Creator.


[Dr Armand Nicholi, speech at a faculty/alumni luncheon hosted by Dallas Christian Leadership at Southern Methodist University on September 23, 1997]

01 December 2006

Just a dream

Maybe there is no creation, no evolution. Maybe what we see is just a dream. A dream dreamt by us… or more precisely, dreamt by the ‘I’ – the ego – in us.

Because the ego wants to believe in the existence of a world. Because the existence of a world is an affirmation of the ego itself. The ego’s raison d’être.

Maybe, because the dream is in our minds, it has no substance. And, if it has no substance, it cannot be real. And, if it is not real, then it has never been created… nor evolved.

Maybe this is the mystery of life.

But, even if this were true, would we believe it? Would we accept it?

29 November 2006

A single truth

In the Christian world, the debate over evolution versus creation is an interesting phenomenon. It seems to be a conflict over what is correct: science or religion; scientific proof or human faith. I mention ‘the Christian world’ here because the rest of the world, with its various other religions, is quite unperturbed by it. Perhaps, such a debate does not matter to them. And I wonder why. Why would it not matter to us where our beginnings lie?

I’m not sure if I have the answer to this question. But, I do know that, once in a while we all wonder who we are; where our ancestors and their ancestors came from. We do indeed wonder where our beginnings lie. We wonder how our universe materialised and how humans came into being. And I know that, on most occasions, we come away from our thoughts dissatisfied, unable to arrive at, or find, or discover, a suitable answer. Maybe that’s because we are always searching for one correct answer. We are seeking a single truth that would solve the puzzle of life instantly… and eternally.

But, why does it have to be one truth and not the other… or not another? Why should one truth override another? Why should a question have a single answer? Or, for that matter, why should many questions have one single answer?

When I had met Charles Handy, the famous British Management guru, he had narrated a story which was also mentioned in his book, ‘The Age of Unreason’: When I was a child and I solved a Maths problem, or answered a question from a History lesson, I checked the back of the book for a correct answer. If my answer matched the answer listed in the back of the book, I was right. It was that simple. It was only when I grew up that I realised that, in life, there isn’t always one correct answer. In fact, every problem has multiple – and sometimes distinctly varied – answers. [This is an approximation of our dialogue; I don’t remember the conversation exactly.]

At work, as a strategic marketing consultant, I am faced with this situation constantly. Every marketing problem has alternative solutions – several strategies that could solve the marketing problem equally well. Neither I nor my clients are one hundred per cent certain that one clear-cut strategy is the only correct solution – the absolutely correct answer – to a specific marketing problem. So, we go with what seems to be the best solution at that specific moment. And, this is something that marketers, businessmen and strategists battle with every day of their lives. Whether they admit this to the world at large or not, this is a fact of life.

Perhaps, therein lies the truth we are seeking: that there is no single truth in life, but several alternatives to choose from. Maybe, in the Christian world, evolution and creation are simply two alternatives to life’s big question.

27 November 2006

What’s all the fuss about?

When I see, or hear of, people fighting over the creation of the world and the evolution of the human species, I stand back and wonder what’s going on. I mean, let’s be honest here. None of us were around when these things happened. Neither do we have absolute proof of what exactly happened.

Sure there’s evidence emerging here and there, now and then, adding to the lot which already exists in books of religion and science and history. And there are people of religion and science and history and philosophy – and even individuals like you and me – all trying to piece things together… making up their minds, updating their versions… to make sense of it all.

But, who’s to say which version is right? As I said, none of us were around when these things happened. For all we know, none of the versions we believe in today is even close to what the truth might be. In fact, our search for the truth may be on, but we may never know what really happened. Couldn’t that be God’s Will too?

Why don’t we just stop fighting over things we aren’t really sure of and concentrate on what we do know?

What we do know is this: We are here on Earth. So, why don’t we just focus our attention on this fact and put our energies into finding ways of living together in peace and harmony… showing compassion and respect for all of God’s creation?

If there is God, I’m sure He would want us to do that… regardless of our gender or age or race or colour of skin or religion or the language we speak or the food we eat or the clothes we wear or our place of residence. After all, we are all equal in His eyes.

Really, I don’t know what all the fuss is about.

23 November 2006

God came much later

With the continuing debate over Intelligent Design, and what a proper Christian should teach his children, human evolution is still a mystery to us. Not just from a historical or scientific point of view, but from the moral view of educating our children with the truth about man’s origins on Earth. What happened when; and what happened after.

If you’ve been reading my posts in the last couple of weeks, by now you would have picked up a layman’s idea of how early man lived his life. Perhaps you already know much more than what I’ve written about. But, Christian or not, you cannot deny the fact that man evolved. That what man is today – with his mobile phone and microwave, computer and camera, automobile and airplane, science and surgery, art, music and literature – is not how he used to be. That over the years, by applying his mind, man has progressed from an animal without clothes to the creator of the cultures we see before us.

Early man had his tools, weapons, artefacts, pottery, baskets to store things in, mud-brick houses to live in, and food to eat. He had his earth, water, fire and the wheel. He probably also had a concept of air; he certainly knew about wind and storms. Physical things mattered to him. You might say, he lived in a materialistic world. God did not exist then. Sure, social and moral guidelines were practised by members of communities. And, there was procreation, of course – life and death. Even caring, compassion and love among human beings.

The concept of God came much later; originating in fear and insecurity, not in compassion and love as we teach our children today.

20 November 2006

Come together

“One thing I can tell you is you got to be free
Come together right now over me.”
[‘Come Together’ – The Beatles, Abbey Road, 1969]

People come together in the larger interests of the community they live in. You may not believe this, going by the events around you, but it’s true. It’s a human trait, a tendency, to come together; and this trait has been with us since even before civilisation. I say even before civilisation because the ‘coming together’ of people happened when the first human communities were formed – much before the ancient civilisations (as we recognise them today) were established.

This ‘coming together’ to form one homogeneous social and economic group is the first sign of human organisation. It most likely happened during the early Neolithic settlements, when man evolved to a pastoral life (herding animals and breeding them as a source of food) and practised agriculture (growing crops) on flat land near rivers. These settlements needed to be managed too, and the first government, or the ‘coming together’ to form a political organisation, was probably established around this time.

Still, managing the environment was a challenge. Predators preyed, the land was not fertile enough, a lack of water or too much of it destroyed food and lives, climatic changes were misunderstood and misinterpreted, or sudden geological events created terror. Life was uncertain. Man began to hope – and pray – for an ideal world for himself. A world where no fear or distress existed; and even if these did exist, there was a way to find courage to overcome them.

And so, man created religion – a ‘coming together’ physically, socially and spiritually to free oneself from uncertainty, fear and distress.

17 November 2006

Foundations

Not much is clearly known about man’s evolution. Fragments of man’s early life are discovered almost every day, and archaeologists and historians, while putting the pieces together, are forced to apply their imagination to draw conclusions.

One of the important questions that British archaeologist Steven Mithen tries to answer in his book, ‘After The Ice: A Global Human History 20,000-5000 BC’ (discussed in ‘Stepping Out’, my 4 June 2005 post), is whether man evolved equally progressively all over the world. So far, it seems, he did not. Evidence from archaeological digs suggests that man evolved at different paces in different places.

Not only that, even when man had evolved enough to build settlements which grew into civilisations – i.e. urban, planned, flourishing, well-governed settlements – around 3500 BC, these ancient civilisations appeared only in a few places across the globe. And, they came to an end just as mysteriously within a period of 3,000 years. Written records found in these civilisations are still being deciphered. Hence, archaeologists and historians are, once again, left with their imagination.

As much as Gordon Childe’s definition of civilisation encapsulated human achievements in terms of discoveries like the plough, the wheel, irrigation, writing, a system of measurements, the sailing ship, etc. (see my previous post), other archaeologists and historians have proposed that human organisation must have complemented human achievements in order to make a civilisation stand on its feet.

Human organisation, they said, included a centralised government (along with territorial/state management); social stratification or a class system (an administrative class, a privileged class in control of production, a priestly class, a working class, and producers); an economy (more precisely, the use of money in trade and transactions); a division of labour (according to skills); and a military for defence. Some have even suggested a tax system and population sizes (a minimum of 5,000 people staying together).

These were, then, the foundations of our civilisation. It was only later that academics thought of including culture (art, literature, music, cuisines, clothing, language, etc.) and a system of shared values in defining our ancient world.

13 November 2006

An exceptionally long revolution

Revolutions are not supposed to last for years and years. They are deemed to be great changes in a short span of time. So, when V Gordon Childe, one of the world’s greatest archaeologists, suggested a ‘Neolithic Revolution’ when early man moved from the ‘striking’ method of making and using tools to the ‘grinding’ (hand-rotating) method, he picked up a lot of criticism. After all, this change in the method of making/using tools took anything upwards of 3,000 years. So what’s revolutionary about that?

Gordon Childe, however, did not limit his use of the term ‘Neolithic Revolution’ only to the method of making/using tools. He believed, this period in man’s evolution also ushered in agriculture (including irrigation), the formation of village-like settlements, domestication of animals, eating habits that included food made from grains, construction (evenly-measured mud-brick houses), the discovery of the wheel, the appearance of gods or deities, superstition, and methods of burial that suggested the concept of an after-life.

Moreover, he believed, in the context of the pace of change that we had witnessed earlier – when human evolution was taking hundreds of thousands of years or tens of thousands of years – a mere 3,000 years did seem like a short span of time.

In the Neolithic period, pottery appeared. First as items made from clay, hand-moulded to give rough shapes. These were primitive-looking pieces that lacked finesse by modern standards. Then, another discovery, the potter’s wheel, ushered in the concept of evenly-shaped mass-produced pottery. Not long after that, pottery was made from firing/baking the clay items, hardening the clay to make pottery last longer. Archaeological findings suggest that the early moulds for pottery were woven like baskets and layered with asphalt or bitumen.

But man’s evolution didn’t stop there. Man ‘discovered’ metal. That meant the discovery of minerals and ores from rocks and the earth’s soil, and better still, the extraction of metal from these ores through a process called smelting. This brought in the Bronze Age (bronze being an alloy of copper and tin), introducing copper and bronze artefacts… adding to (and replacing some of) early man’s collection of stone and bone tools and artefacts.

According to Gordon Childe, several elements were essential for a civilisation to exist. He identified them as the plough, irrigation, domesticated animals, specialised craftsmen, the wheeled cart, the smelting of copper and bronze, sailing ships, a solar calendar, writing, standards of measurement, urban centres, and a surplus of food necessary to support non-agricultural workers and others who lived in the settlements/communities (villages and urban centres).

11 November 2006

A material life

Somewhere along the line, it’s not clearly known when, man began to make tools of a more sophisticated type. These tools were made not by striking one stone against another – the older method, which gave rough edges – but by rubbing and grinding one stone against another using a hand-rotating movement. This was a remarkable upgrade in the technology of that time, and early man began to make tools with smoother surfaces and sharper edges.

Apart from developing hunting and fighting weapons of better quality, this technology ushered in a whole range of tools for grinding grains, de-husking seeds, axes for cutting down trees, primitive hoes for digging soil, and even utensils such as bowls.

It is suspected that around this time, cultivation was ‘discovered’… most probably by women who, when the men were out hunting, went beyond their duty of collecting fruits and roots and seeds to actually planting seeds and roots in the ground and nurturing them. This increased the supply of food near their settlements and made life more convenient. And so, agriculture was introduced into human civilisation.

With agriculture came the domestication of animals, with cattle, goats and sheep providing milk and meat. Just as barley, wheat and several other kinds of grains were cultivated, animals were bred in captive stocks. The buffalo, which was a chief source of food, had not yet been tamed and, therefore, hunting was still a preoccupation with the men. But, the dependence on hunting was definitely reduced by then.

Weaving of baskets (from reeds and grass) had begun around this time, supplementing utensils made from mud and clay. Clothes were still made from animal hides, but perhaps some were made from weaving hair taken from goats and sheep. Bones were used as tools and, along with conch shells and beads made from soft stones, were also used as ornaments worn by men and women. It became a material life, with individuals identified by their possessions.

As life settled, villages grew in numbers and elementary houses of clay and mud-bricks were constructed. There is evidence that these houses were used both for living and for storage of grains. For, by this time, the villages and communities had learnt to grow more food than needed for their bare subsistence, thereby producing a surplus (see my earlier posts). It is likely that this surplus was appropriated, most probably by force, by some non-producing people as their right.

With agriculture, ownership of domesticated animals, production and appropriation of surplus, construction of houses and development of skills in crafts, a sense of social differentiation or classification emerged. The concept of private property became important. No longer were division of labour and social differentiation in the community a matter of gender – i.e. men were hunters, women were gatherers. Villages and communities fell into a structured social order. They needed to be managed – and governed.

09 November 2006

The first art galleries

Life was hard in those Paleolithic days. The average life span of prehistoric man is estimated (by today’s scientific measures) to have been less than 30 years. Hunting was a preoccupation, with animals providing our ancestors their main meals. Yet, there was something sacred about this, and early cave art tells us that our ancestors considered relationships between humans and animals to be important.

Cave art – i.e. paintings by prehistoric man found on the walls and ceilings of caves – came to public attention when the first cave paintings were found, accidentally by a little girl, in Altamira, northern Spain, in 1879. Since then, thousands of similar paintings have been discovered in caves across the world, the earliest dating back some 27,000 years. The interesting thing is, no matter where on Earth cave art is discovered, the similarities between the paintings, in terms of subject matter and style, are amazing.

The paintings look like line drawings of animals and humans in stick-like forms, mostly depicting hunting scenes. A few show women and children, such as the one of a woman carrying a load on her head, dated 8,000 years ago, found in the Bhimbetka rock shelters in Madhya Pradesh, India. It seems charcoal was the main ingredient in the black paint used in many of these cave paintings.

By today’s standards of fine arts, prehistoric cave art is childish, even clumsy. Yet, these collections of cave paintings represent what would be the first art galleries in human history. Even today, they are an enigma to historians, archaeologists, anthropologists and sociologists. In some caves, according to dating measures, paintings were discontinued and resumed thousands of years later. Why, no-one knows for sure.

Records suggest that cave art continued for 20,000 years or so, and came to an end when the hunters’ precarious way of life disappeared. This happened, slowly, as early man began to rely more on grains and vegetables for his food, and moved to plain open spaces which allowed cultivation. By this time, of course, human populations had increased, making it difficult for larger groups of people to stay in or near caves.

07 November 2006

Superior

It’s not just animals that early humans – our ancestors – slaughtered, making many of them extinct. They killed rival human species too. There is a point of view that several weaker human species perished at the hands of more robust, superior humans, becoming totally extinct. Some, of course, fled to distant regions, living isolated lives, limiting or slowing down their evolutionary growth.

The victors evolved progressively, moving from living in natural sites such as rock shelters and caves for temporary refuge to forests where they learnt to build primitive huts from branches of trees and settle down in tribes/communities. Animals found in the forests in their wild state provided basic food – both milk and meat. Some species of birds provided eggs and meat as well. Those living near rivers or the sea learnt to catch and eat fish and turtles. Most food was eaten raw, but, with the discovery of fire some 20,000 years ago (I’m not sure about this date), roasting of meat became common practice.

Social and economic management came into practice as well, with division of labour and barter systems enabling smooth functioning of the overall tribe/community. Different people were allotted and participated in different tasks, ensuring that the entire tribe/community benefited from their collective effort. For instance, while the men hunted animals for food, the women gathered fruits and grains near their settlements. At this time, an important economic discovery was made: early man learnt to store goods for future use, thereby creating the concept and the value of ‘surplus’.

Technology, apart from fire, was still limited to stone tools, and the improvement of this technology – the creation and use of better and better stone tools – is really what defined human evolution. The stone tool was no longer just a weapon for protection and hunting, but began to be used for cutting, chopping, axing, cleaving, carving, boring, drilling, pounding, grinding… the applications were numerous. Many more tasks could be performed with these new stone tools, and with tools made from animal and human bones added to the kit, our ancestors were not only superior, but virtually unconquerable.

05 November 2006

Lord of the beasts

[No, it’s not the lion.]

Although we like to fantasise about dinosaurs and humans in a fight to the death, dinosaurs were extinct long before humans appeared on Earth. According to the established and accepted systems of dating the Earth – in Geological Ages – dinosaurs lived in the Mesozoic Era, somewhere between 248 million and 65 million years ago (give or take 5 million years). The first humans appeared only 2 million years ago or thereabouts.

It is acceptable today to describe or define the first humans as those who, having emerged from the ape species (the hominids), could naturally walk erect on two legs (as bipeds). Evidence from fossils, dating between 2.6 million and 1.7 million years ago, points to Africa – particularly East and South Africa – as the earliest source of this human life form, called Homo habilis.

Homo habilis – and a younger contemporary, Homo erectus – were really something to talk about. They had a much larger brain than other hominids of the time, and, expectedly, greater intelligence. Evidence suggests that they may have lived collectively in groups, uttered gargling sounds which could be believed to be the first ‘words’, and could make stone tools by striking one stone against another, breaking away flakes to get a cutting edge.

This cutting-edge technology allowed the early humans to make weapons, which were used for protection and to kill other animals. Kill they did, in such large numbers that many animals became extinct or were drastically reduced in numbers. Lions, tigers, leopards… fell into this category. Some fled and survived in the wild as humans began to form colonies and settle down in specific locations.

As lord of the beasts, man promoted only those animals – such as horses, cattle, pigs, sheep, goats, poultry, dogs, cats, and even elephants – which he could domesticate or bring within his control by other means. Others had to take their own chances. In a fight to the death, early man ruled over all animals… and lived to tell the tale.

04 November 2006

An organised life

In a settlement, many of the surprises common to a nomadic life of a hunter-gatherer – such as travelling to unknown destinations, discovering new terrains, foraging for food, protecting oneself and the tribe from predators and other adversities – were replaced by routine activities in an all-too-familiar territory. Uncertainties were rooted out and life was organised.

Of course, a settled life didn’t mean peace and harmony. New problems surfaced: how to manage resources on a sustainable basis; how to till the land so as to get more yield year after year; how to store food and other commodities without spoiling them; how to harness the power of the seasons; how to deal with conflict arising from people living in close proximity.

Slowly a sense of self emerged in the early human settler; along with it, a consciousness of differential access to resources, skills and knowledge. And thus, to social status. Personal possessions became important. People used differences in their material possessions, as well as their knowledge and skills, to express their economic and social differences from others in the settlement.

Tensions grew, posing problems to the organised life that had been created.

03 November 2006

A lazy man’s approach

In the halls of history, anthropology and archaeology, there’s a debate raging over similarities between recent societies and past civilisations, taking discussions all the way back to the evolution of human beings. Who are we? Where do we come from? Are there links between colonies of cavemen and present societies?

It seems, finding answers to questions on whether we, today, bear any resemblance to our ancestors living thousands of years ago is turning out to be more difficult than expected. Especially from an archaeological point of view, since archaeology has to provide hard evidence to justify every theory.

Much to the disappointment of archaeologists, many colonies belonging to our Stone Age (or even earlier) ancestors may have developed into metropolitan cities of today, thereby denying them opportunities for archaeological digs and discoveries. And so, thanks to progress and development, a great deal of valuable evidence of our past may be lost forever.

There are theories on how colonies were formed, growing from a simple gathering of nomadic hunter-gatherers to groups of human dwellings to full settlements, which bear resemblance to immigrant movements and lifestyles of today. In one theory, based on analysis of bone chemistry of skeletons excavated from archaeological sites, archaeologists have found that many of our ancestors may have been lazy – practising a sedentary lifestyle.

From dietary patterns, research has found that early humans had given up their foraging lifestyle, trekking across vast landscapes as hunter-gatherers, to settle down to a quieter life in areas which were rich in resources. After all, the trek and the hunt were in search of ‘resources’, fairly similar to immigrant populations of today in search of ‘resources’ leading to a better life.

Among these early humans, settlement patterns soon became fixed; specialisations over local resources developed into common tasks and professions; daily routine became lifestyle. As long as the settlement did not run out of resources, there was no need to move. Over thousands of years, settlements such as these slowly developed into cities.

It’s interesting to note how a lazy man’s approach to life may have actually created great civilisations of today.

01 November 2006

Creatures of the past

[No, this is not about pre-historic animals.]

In every community there exist connections between people. Between one individual and another; between groups of individuals; and between one individual and the community as a whole. These connections constitute a community’s social capital. This social capital, like its culture and identity, is something people simply have. It’s a leftover from their long dwelling together as a group… a sort of continuity with their past.

The nature of some of these connections gets modified over the years; but, all in all, they remain, collectively as social capital, an important component of the community and its prosperity. In fact, the potential for this social capital actually spreads beyond the community – i.e. the immediate group of people who constitute the community – and influences others outside it. Depending on their social capital, some communities welcome others, embracing new cultures and new thinking. Some exude negative vibes and actually repel outsiders.

Racism, genocide, caste-based behaviour and communal feelings are examples of such negative social capital. Or, more correctly, of negative externalisation of such social capital. Sometimes it’s difficult to separate one from the other. Internally, the social capital of a community can act as a cohesive or binding force, enhancing the functioning of the community; but externally, it can treat people with suspicion, hostility and hatred… excluding others as outsiders. Some of these negative feelings have a history behind them, embedded as they have been in the minds of people for generations, and are difficult to change or erase.

Hopefully, with globalisation, the social capital of communities and countries will improve, creating a better place to live for all of us. But, then again, who knows? We are, after all, creatures of the past.

30 October 2006

Elementary societies

Social capital in the form of cooperation and trust is an integral part of all societies and economies. Some may even thrive on the notion that cooperation and trust are their basic foundation. This notion is typical of small communities like tribes and villages rather than of modern metropolitan cities; and, I guess, it is more a characteristic of underdeveloped or developing economies than of prosperous ones.

This phenomenon can be seen today in the adivasis (ancient tribes of India) who still live by themselves in self-sustaining, self-reliant communities, governed by the policies of their own tribe/community. Everyone knows everyone else in the tribe/community, and everything is done to contribute to the general well-being of the tribe/community. Among the adivasis, the notion of selfishness is almost non-existent.

It’s an elementary form of society/economy, carried forward from pre-historic times when there were no formal policies or laws or institutions guiding tribes or people. Cooperation and trust were the only means of survival and prosperity for these tribes/communities.

Although informal in nature, the norms of social interaction, joint effort and governance adopted by these tribes/communities were quite effective, and governed them efficiently. These tribes/communities prospered to become great civilisations and, today, form the societies and economies of which we are a part.

However, over hundreds of thousands of years, societies/economies have become more complex. Goods which were once freely available have become scarce, and competition has become a driving force behind economies and human behaviour. Nations have been formed, politically demarcating geographies, human populations, cultures and ideologies. Laws and institutions have been established for governance; while religion and systems of family/community education have assumed the role of instilling moral order.

28 October 2006

Social capital and the moral order

Where does moral order come from? Do we really have, as Marc Hauser proposes, an automatic ability to distinguish right from wrong? Is our moral order really a function of our biological evolution? What are the implications of a breakdown in the moral order in a society?

Some seven years earlier, in his book, ‘The Great Disruption: Human Nature and the Reconstitution of Social Order’, Johns Hopkins professor Francis Fukuyama had suggested something similar: that human beings were biologically driven to establish moral values. That these values evolved from the ground up, rather than being imposed by government or organised religion.

If this theory is true, does it mean that the growing corruption that worries Indian youths today (see my previous post) is really a reflection of themselves?

Professor Fukuyama’s book did not mention India. It occupied itself with the United States and Western society, specifically with their development over the past 50 years. However, professor Fukuyama did talk about social capital as a key building block of modern society, something that permitted cooperation and trust among its members. He suggested that as social capital depleted in a society, so did cooperation and trust among its members. This, in turn, resulted in an increase in family break-ups, drug use, crime, and other anti-social behaviour.

Could this theory be true? If social pathology was an indicator – a measure, perhaps – of overall social trust, what meaning did it have for the level of corruption in India today?

[Professor Fukuyama defined social capital as “a set of informal values or norms shared among members of a group that permits cooperation among them. If members of the group come to expect that others will behave reliably and honestly, they will come to trust one another.”]

26 October 2006

Apathy

Sometime ago, as I stepped out of a retail store in Mumbai, I was confronted by a group of college students who wanted to interview me on the growing corruption in our country.

I’m afraid I gave them all the wrong answers: that corruption is a fact of life; that it’s everywhere, and not confined only to India; that money corrupts, power corrupts, sex corrupts, materialism corrupts, our greed corrupts; that the situation is likely to remain so despite our efforts to reduce it; that I had no idea how corruption could be managed apart from public awareness and action.

Looking back, I feel ashamed.

How could I have been so insensitive to those students and their desire to bring about a better life for everyone? Isn’t corruption a social evil we could all do without? At least, those students were attempting to address the issue of corruption in India. With all my maturity and experience, I was doing absolutely nothing about it.

If those students could not rely on the support of elders like me, who could they turn to? What kind of social and moral order was I presenting to those students? Has corruption permeated to such levels of morality that people like me actually give our consent to corruption by not challenging it? Does that mean my moral values need re-evaluation as well?

I believe people, individually or in groups, create social and moral orders for themselves as a natural process. I believe they create these orders on common shared values that build societies and civilisations. I believe India is a prime example of this, and she has a 5,000-year-old heritage to prove it.

I believe, in recent years particularly, the media in India has helped broadcast these common shared values to millions of people across the country, accelerating India’s growth. I believe this has resulted in positive shifts in values within families, within communities, at work, and between partners.

Yet, I see a breakdown of the family structure that India was so proud of. I see a hedonistic individualism replacing common shared values. I see ‘personal gains’ as the driving force behind the new social and moral order. And, for the weak, I see apathy as its greatest corrupting influence.

24 October 2006

Morality: a collective possession

According to recent theories, such as those proposed by Harvard professor Marc Hauser, the human capacity to make moral distinctions – right from wrong, good from evil – is programmed into the brain.

Across the globe, people follow the same basic rules of morality – a limited set of rules which professor Hauser calls ‘moral grammar’. However, professor Hauser cautions us, the actual moral choices of an individual also depend on how the culture in which the individual grows up uses this grammar, as well as on the emotions the person experiences when he sees others contradict what he has learnt to believe is good.

Could this mean morality is something we have as an existential possession? An inheritance from our forefathers that guides the way we live our lives today? Could morality, like language or ethnic identity, be a sort of collective possession of groups of people, binding them into socio-cultural orders?

If so, then morality may be a key factor in the construction and maintenance of these communities and individual cultures. That might explain why family values, the law, the education system, the interpretation and practice of religion, and the media vary from one culture to another.

If we study films as representations of culture and morality, we find many differences between India and the rest of the world. Indian films do not explicitly show on-screen kissing or sex. Topics like homosexuality or incest are taboo. Even within India, at a regional level, there are clear differences. For instance, films from the state of Tamil Nadu in South India are permissive, showing a lot of skin on their female actors; while films from Kerala, a state next door, are more modest, preferring to stay away from any overt display of skin.

Why is this so? Perhaps, morality is as much a consequence of a community’s sense of self, cultural upbringing and existential well-being – a collective possession, so to speak – as it is a universal feature of programming in the human brain.

22 October 2006

The fact of the matter

Is there such a thing as a moral fact?

There are facts that prove things to be true – at least as we know them now. For instance, all things fall due to gravity. The Earth is round and not flat as it was once thought to be. A liquid takes the shape of the container it is held in. There are 60 minutes to the hour. Sugar is sweet to taste. Men are different from women in their physical appearance.

Evidence can be provided as proof of such facts; and rational people accept such evidence, proof and facts in their daily lives. For, these facts guide our actions.

However, when it comes to morals, life becomes complicated. Facts are no longer facts – measurable quantities or proven axioms – but issues guided, or influenced, by our personal interest, our appetite for or aversion to risk, and strategic considerations with a view to a larger or long-term goal.

Therein lies the rub. For example, even if we know that causing intentional harm to others is morally wrong, we may still choose to do it.

Although a child knows he should not tell a lie, he may still do so to protect himself or a friend from punishment. Although an adult knows that he should not tell a lie, he may still do so in order to avoid hurting others, or to avoid creating an ugly scene at a specific moment, or to take advantage of a situation.

How does the child or the adult take such decisions – sometimes instantaneous, almost instinctive, decisions – on morality and stray away from, or even violate, social or parental teaching, knowing full well the facts of the matter?

Faced with a moral question, people, both young and old, apply their minds from the point of view of what they believe is the nature of the situation at hand – in other words, the facts of the matter – and make up their minds, sometimes instantly, to act in a specific manner.

The process of making up their minds – and, in turn, the moral outcome – is, of course, guided, or influenced, by their personal interest, their appetite for or aversion to risk, and strategic considerations with a view to a larger or long-term goal.

Then there’s the issue of human instinct, as well as the theory of moral grammar that Harvard professor Marc Hauser talks about.

The facts of the matter about what is right or wrong, about what morally ought or ought not to be done, or about how people will act in a given situation, are not easy to determine, nor to explain.


[Citation: ‘Moral Realism and Cross-Cultural Normative Diversity’ – a paper by Edouard Machery, Daniel Kelly, Stephen P. Stich]

20 October 2006

Moral choices

Are there principles in life that teach us how to differentiate between good and evil? If there are such principles, are they evolutionary – genetically programmed in us – and remain common for all of us on this planet? Or, do they change from one culture to another?

Some sociologists, psychologists, evolutionary biologists and anthropologists – like Harvard professor Marc Hauser – believe there is a universal ‘moral grammar’ underlying all specific moral norms that different cultures embrace. For example, everywhere, people recognise universal values such as fairness, responsibility and gratitude – and oppose cruelty, unfairness and oppression. This ‘moral grammar’ is programmed into our brains as a part of our evolutionary process.

Moral choices depend on how each culture uses this basic ‘moral grammar’. If we reward those brains that mean well for our culture and us, we are likely to create civilisations which promise a bright and peaceful future.

18 October 2006

Once upon a time...

Once upon a time, when man had just evolved from apes to take on the human form, there was no concept of good or evil, right or wrong. The concept of moral or immoral behaviour did not exist in man’s consciousness at that time. Man acted on impulse, with no thought for the consequences of his actions… on others.

Gradually, man began to identify with himself; then with his family, his community, his tribe. His conscience was awakened. He became conscious of doing good, as different from doing harm, to others. He began to understand that some of his actions were right; some wrong. He began to distinguish right from wrong, good from evil.

And man understood that right or wrong, good or evil, moral or immoral behaviour are concepts… each depending on the other for its existence.

16 October 2006

The important thing to do

It’s all right to let millions of people die from starvation, AIDS, pestilence and war – or a combination of these – while we enjoy an easy life, consuming more and more to satisfy our inner desires. After all, it’s not directly affecting us. Not at the moment. And, the way things are, it’s not likely to do so, soon. So, let’s carry on with our lives just the way we have been.

Or, maybe we should stop and listen to people like Bono (see my earlier post) and a thousand others who campaign against life’s inequalities – for others, not themselves. Maybe we, too, should have the courage and the motivation to step up to do something about what we feel is not right in this world. To think of others, and not just ourselves. Simply because, as human beings, it’s an important thing to do.

Philosophers, historians, social scientists and religions have debated over morals for centuries. And, we are still left with our own conscience. What may be unfair to us may not be so to others. Even the degree of unfairness in a specific situation – the gravity of it – may be debated. This measuring of a situation’s unfairness may divide us into those who act against unfairness and those who don’t. Or, into those who believe someone else should take action, but not us. Or, into those who intend to take action – but, perhaps, not right away.

Then again, what about our feelings? Our passions or our compassion towards another human being or another life? Or, this planet? What about our anger or our excitement at seeing something unfair or cruel happening right before our eyes? What of that? What do we do then?

What is right? What is wrong? How do we decide right from wrong? And, what do we do after that? These questions may dog our minds and take away our sleep at night, but they come later. Much later. What’s important is what we do before our powers of reasoning cloud our minds against unfairness or injustice or cruelty or suffering. Perhaps these should be motivation enough for us to take action.

14 October 2006

When will Indian brands turn (RED)?

This is an excerpt from The (RED)™ Manifesto:

“All things being equal, they are not.

As first-world consumers we have tremendous power. What we collectively choose to buy, or not to buy, can change the course of life and history on this planet.

(RED) is that simple an idea. And that powerful. Now, you have a choice. There are (RED) credit cards, (RED) phones, (RED) shoes, (RED) fashion brands. And no, this does not mean they are all red in colour. Although some are.

If you buy a (RED) product or sign up for a (RED) service, at no cost to you, a (RED) company will give some of its profits to buy and distribute anti-retroviral medicine to our brothers and sisters dying of AIDS in Africa.

We believe that when consumers are offered this choice, and the products meet their needs, they will choose (RED). And when they choose (RED) over non-(RED), then more brands will choose to become (RED) because it will make good business sense to do so. And more lives will be saved.

(RED) is not a charity. It is simply a business model.”

Today AIDS is a preventable, treatable disease. Yet 5,500 people are dying from AIDS in Africa every day. (RED) hopes to address this issue and save lives in Africa.

(RED) is an idea started by Bono (of Irish rock band U2) and Bobby Shriver, Chairman of DATA, earlier this year, to engage consumers and the corporate world with its marketing prowess and funds to fight AIDS. The campaign intends to raise awareness and money for The Global Fund by teaming up with the world’s most iconic brands to produce (PRODUCT)RED branded products. A percentage of each (PRODUCT)RED product sold is given to The Global Fund. The money will help women and children affected by HIV/AIDS in Africa.

The parentheses, or brackets, are used to indicate ‘the embrace’. Each company that becomes (RED) places its logo in this embrace and is then elevated to the power of red. Thus, the name (PRODUCT)RED.

Yesterday, 13 October 2006, (RED) partners Gap, Converse, Motorola and Apple launched their (PRODUCT)RED™ products in stores in the United States and online. (RED) is supported by various celebrities such as Oprah Winfrey, Steven Spielberg, Penelope Cruz and Don Cheadle, among others.

In India today, the story of HIV/AIDS is just as serious, if not more so. The following report is from AVERT, an international AIDS charity working in India:

“India is one of the largest and most heavily populated countries in the world, with over one billion inhabitants. Of this number, at least five million are currently living with HIV. According to some estimates, India has a greater number of people living with HIV than any other nation in the world.

HIV emerged later in India than it did in many other countries, but this has not limited its impact. Infection rates soared throughout the 1990s, and have increased further in recent years. The crisis continues to deepen, as it becomes clearer that the epidemic is affecting all sectors of Indian society, not just the groups – such as sex workers and truck drivers – that it was originally associated with.

In a country where poverty, illiteracy and poor health are rife, the spread of HIV presents a daunting challenge.

There is disagreement over how many people are currently living with HIV in India. UNAIDS (the United Nations agency that co-ordinates global efforts to fight HIV) estimates that there were 5.7 million people in India living with HIV by the end of 2005, suggesting that India has a higher number of people living with HIV than any other country in the world.”

What India needs today is a project similar to (RED). Indian brands and consumers need to come together to address the issue of HIV/AIDS and help rid – or, at least, reduce – the threat of this disease that is spreading across our country.

Will they? If not now, when?

13 October 2006

India’s quiet revolution

It’s a shame that many in India think advertising is a bad thing. That it’s distasteful. That, being at the cutting edge of capitalism, advertising is a tool to make people buy what they don’t need. That India, struggling with her poverty and her low per capita income, can do without it. This belief, a fallout from our pre-Independence days when Indians had a negative disposition towards anything British, is partly responsible for the poor growth of regional-language newspapers in India. Advertisements in newspapers were considered such a British capitalist thing… to be discarded and trodden upon.

And so, for a long time in India, English newspapers walked away with all the advertising money, leaving regional-language newspapers to flounder like fish out of water. Printing machinery supporting Indian-language scripts was difficult to find. The cost of newsprint was high (Indian-language newspapers consumed more newsprint, as their scripts took up more space for the same amount of copy). And the distribution channel – the newspaper hawkers – took away a fair share of the income. Income from circulation was too little to sustain a regional-language newspaper on its own.

The central and state governments stepped in to rescue the situation and, despite criticism against them, became large advertisers in India’s regional-language newspapers. Leading the show was the central government’s Directorate of Advertising and Visual Publicity (DAVP), which still contributes a substantial amount to regional-language advertising, with individual state governments joining in to support their own vernacular newspapers. Even then, it wasn’t much. Post-Independence, the bulk of advertising spend came from commercial advertisers; and they believed that people with purchasing power could only be reached through English newspapers and magazines. This belief ruled advertising spends in India until recently.

In the eighties, with offset printing technology supporting their enterprise, regional-language newspapers and magazines finally saw a turn in the tide. They could print more, faster and cheaper than ever before; and distribute their publications in the rural markets of India where English newspapers had no presence. With over 70% of India’s population living in rural India, advertisers saw this as an opportunity, re-designing and re-packaging their existing products, or creating new ones, to address the needs of the rural markets… spending good money on advertising in vernacular publications. Circulation figures for regional-language newspapers soared and, with them, advertising revenues.

The Indian-language newspapers gained prominence, fulfilling a dual role. They covered local news, incorporating the culture of the region and connecting with their readers. At the same time, they brought national (and international) news and information about products and services into the household of the rural Indian. Both regional/local as well as national advertisers woke up to this fact and increased their advertising spends in vernacular newspapers. According to Aroon Purie of India Today, at the Frames 2006 conference organised by FICCI in March this year, there is heavy growth in vernacular publications, with Hindi publications growing at 68% and Telugu publications at 63%.

However, all was not hunky-dory for the vernacular publications. At the same conference, Jacob Matthew of Malayala Manorama stated that, during 2002-04, although advertising contributed 60% of revenues, most vernacular newspapers depended on revenues from circulation as well – some even for their survival. Apparently, of the total advertising pie of Rs.10,900 crores for print advertising in India (from a PWC-FICCI study launched at Frames 2006), English publications still cornered 50% of advertising revenues. Perhaps we haven’t got over our English hangover yet.

Nevertheless, a 50% chunk of the print advertising revenue pie is a huge gain for regional-language newspapers. With growth pegged anywhere between 12% and 16%, regional-language spends are likely to overtake spends in English publications very soon. I’d say this is nothing short of a revolution.
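As a rough back-of-envelope sketch of why ‘very soon’ seems plausible (an illustration, not a forecast): if the Rs.10,900-crore print pie is split roughly 50:50 today, and regional-language spends grow at 12-16% a year while English spends grow more slowly, the regional share pulls ahead almost immediately and keeps widening. The 8% growth rate assumed for English publications below is my own illustrative assumption; the reports quoted above don’t give one.

# Back-of-envelope sketch, in Python: regional-language vs English print
# ad revenue, starting from a roughly equal split of the Rs.10,900-crore pie.
# The 8% growth rate for English publications is an assumption for
# illustration only; the 12-16% range for regional spends is from this post.

total_pie = 10_900                      # Rs. crores, total print advertising
regional = english = total_pie / 2      # both segments start at ~50% each

regional_growth = 0.14                  # mid-point of the quoted 12-16% range
english_growth = 0.08                   # assumed, for illustration

for year in range(1, 6):
    regional *= 1 + regional_growth
    english *= 1 + english_growth
    share = regional / (regional + english) * 100
    print(f"Year {year}: regional Rs.{regional:,.0f} cr, "
          f"English Rs.{english:,.0f} cr, regional share {share:.1f}%")

Even on these conservative assumptions, the regional share creeps past 55% in about four years.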

[Here’s more data from Frames 2006: S K Arora, Secretary of the Information & Broadcasting Ministry, commented that print readership in the urban sector was flattening out, but there were superb gains in the rural market. These gains were coming from approx. 47,000 Indian-language publications, of which 22,000 were in Hindi and the rest in 90 regional languages. (Apparently there are 9,000 English publications in India.)]

11 October 2006

The story has just begun

Very few people in India can read English. My guess is, of the literate population (people who can sign their names – approx. 2 out of 3 Indians), it’s 5%. That would mean around 40 million (give or take 5 million) people know English well enough to read English newspapers in India. Still, the English newspaper industry is flourishing, garnering a substantial part of the advertising revenues. That’s reason enough for several Western media companies to invest in India (see my previous post ‘India assures a future for print journalism in English’).
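For what it’s worth, here is the arithmetic behind that guess (a sketch only; the population figure is my own round assumption, while the literacy fraction and the 5% share are the numbers quoted above):

# Rough arithmetic behind the "around 40 million" guess.
# The 1.05 billion population figure is assumed for illustration;
# the literacy fraction (2 out of 3) and the 5% English-reading share
# are the figures quoted in this post.

population = 1_050_000_000        # assumed population of India, circa 2006
literacy_rate = 2 / 3             # "people who can sign their names"
english_reading_share = 0.05      # guessed share of the literate who read English

english_readers = population * literacy_rate * english_reading_share
print(f"Estimated English-newspaper readers: {english_readers / 1e6:.0f} million")
# prints roughly 35 million – within the "40 million, give or take 5 million" range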

This bias towards English newspapers was, perhaps, expected. The issue is language: India uses 18 different languages in 10 different scripts, much of this distributed geographically. No single Indian language can reach out to all of India. Indian-language newspapers have always suffered from this. Moreover, printing technology responded rather late to India’s need for these scripts: there was a lack of mechanical typesetting facilities for composing in Indian languages. Here again, the problem was to be expected.

Indian languages are complex, with many more letters in their alphabets (a dozen or so more) compared to English or the Roman script. Then, the shapes of the letters created a dilemma, sometimes with two or more letters combining to create another letter-form. As if this weren’t enough, Indian-language scripts use space both above and below the line, taking up more vertical space (on average, Indian scripts take up 15% more space than Roman scripts when setting a block of text – i.e. slightly fewer words fit into a given space than with Roman scripts). Although hot-metal casting of type in the Roman alphabet was in regular use by the 1880s, it took another 50 years to perfect a similar technology for Indian languages.

In spite of these drawbacks, India did have its language newspapers as early as 1821, when Raja Rammohan Roy in Calcutta published the first-known Indian-language newspaper, a Bengali publication called Sangbad Kaumudi. This newspaper, unfortunately, is long forgotten. In 1822, Fardoonjee Marzban in Bombay published Mumbai Samachar in Gujarati, which is still published today, making it the oldest Indian newspaper in print. The first English newspaper in Bombay was printed much earlier, in 1777, by Rustomji Keshaspathi (I’m not sure what it was called).

Offset printing technology and photocomposition of type came to India in the late 1960s, once again with a bias towards the Roman script. It took another 20 years for them to become popular with Indian-language newspapers, as the computers and software necessary to use offset printing technology were not readily available in the Indian languages. With offset presses in place, the late eighties saw the beginning of a printing revolution in Indian-language newspapers, the benefits of which are visible today – not only in news and editorial, but also in advertising. For the Indian-language newspaper, the story has just begun.

08 October 2006

Yet another dilemma

“You were who you were because of the language and dialect you spoke, the location of the village of your male ancestors, the family and religion you were born into. I was a Bengali and proud of it, which meant that I claimed as heritage a culture distinct from that of a Bihari or a Punjabi or a Gujarati or a Tamil. That’s the way we were brought up in Calcutta in the Fifties. We were encouraged to set ourselves apart from people of other Indian states.”
(from ‘An Interview with Bharati Mukherjee’ by Tina Chen and S X Goudie, University of California, Berkeley, 1997)

I wonder if things have changed much in India since the Fifties as far as this observation about defining our identity goes. In India today, we still identify ourselves with our father’s name and his cultural/ethnic background. Mind you, this is not an extraordinary notion confined to India. All over the world, after defining their nationality, people do make a reference to their roots/origins – indicating a consciousness that people carry about their social identities.

The significance of this social identity increases with migration – of the kind India is experiencing today as millions of people from the rural setting are moving into the cities (see my previous post). Migration leads to a fusion of cultures, interracial marriages and, perhaps, more confusion over identities. Governing all of this is the history of a nation of poor people. For marketers trying to champion their cause in such a marketplace, this is a real challenge.

How do marketers win the hearts and minds of consumers who are so diverse that reaching out to them means communicating in 18 different languages? How do marketers identify and use the myriad images, colours, symbols and nuances that are the building-blocks of a hundred different cultures? How do marketers create demand in millions of consumers who have been used to using wood and coal and kerosene as their source of energy?

Despite the much-flaunted modern India of mobile phones, fast cars and Western fashion, India has retained many of its former characteristics. A sense of self-identity defined by our place of birth and our male ancestors, which Bharati Mukherjee spoke of, is one such critical characteristic, and it defines our buying habits and consumption patterns. For marketers, the challenge is to understand this and find innovative ways of blending India’s past with the future through sustainable marketing and advertising campaigns.

It’s yet another dilemma for marketers. One thing is for sure, though: India’s regional/local-language media is likely to benefit from this the most.

[Bharati Mukherjee is the author of four novels, two short-story collections, and two works of non-fiction (co-authored with her husband Clark Blaise), and a well-known creator of immigrant literary theory “that embodies her sense of what it means to be a woman writer of Bengali-Indian origin who has lived in, and been indelibly marked by, both Canada and the United States.”]

07 October 2006

India’s cities: a life source for millions

As India’s urban affluent head out on the highways for a joy ride in their fast cars (see my previous post), millions from India’s over 600,000 impoverished villages move into the cities for a daily living. Travelling in trucks and buses, on trains and even on foot, they migrate not only to the nearest town but also to distant cities (people move from Orissa to Delhi, Bihar to Surat, Kerala to Mumbai), offering themselves as cheap labour to their new city employers.

Working for 50 rupees a day (just over US$1), they take their chances with the new economy that has permeated everything they see, leaving behind their fields and their farming forever. Typically, they look for factory jobs or jobs at construction sites, in groups, sharing meals and spending nights in the most squalid conditions. With supply outstripping demand, many end up begging on the streets or turning to crime. If they get lucky, they become street vendors, owing their allegiance to a local ‘boss’.

Of course, not all migration is of this type. There are others: smarter people, with some education, coming to the city with a friend’s contact, staying eight to a room. They take up jobs in shops or offices (recent attractions have been India’s retail and service sectors); become drivers for cars, auto-rickshaws or tempos (India’s version of utility vehicles); perhaps join the police force; or, in a place like Surat, become a well-paid diamond polisher. They get absorbed by the city, straining its resources, and, in turn, over the years, changing the city’s demography and culture.

There’s a churn in the villages too. With people moving out of villages, the village landlord’s power base is slowly withering. Moreover, when these city migrants return home for a visit, they not only bring with them money for the family, but also a new outlook to life and self-governance. They share their stories and experiences of their struggle, of sustenance, of self-reliance, of consumerism, of indulgence, of aspirations… giving hope to millions who, in turn, look to India’s cities as a life source.

05 October 2006

The road to glory

There was a time when, to an Indian, owning a car was an achievement. Those who owned one were rich; those who owned more than one were filthy rich. There were two types of cars to choose from: Fiats and Ambassadors. Their shapes, their engineering, even the colours they were available in, didn’t change much over the years. Whoever owned a car owned one of these models, moving around town with an air of superciliousness. The rest of India either admired them or hated them.

In the 1980s, things changed with the entry of Maruti Suzuki. They introduced a small car and a small van – Sanjay Gandhi’s dream, although no-one gives him any credit for it – in bright colours, with low noise, faster pick-up and better fuel efficiency than the Fiat or the Ambassador prevalent on the roads at that time. Marutis became extremely popular, overshadowing attempts by the Contessa and the Standard 2000 to create a luxury segment in the Indian auto industry. Millions of Indians began to see new possibilities. For the middle-class, owning a car in the future was going to be a reality.

After a 15-year journey, that future is here. Indians are buying cars left, right and centre, with a million new cars coming onto the roads every year in various models, sizes and colours. Not just Indian brands: foreign car companies, too, are now in India with their manufacturing and marketing facilities, introducing new models every three years. India has now become one of the fastest-growing car markets in the world. Cars have become the new symbol of recognition in society – a metaphor for status and privilege, and for the new consumerism that has flooded the country.

With better engineering, the new cars also represent speed. To accommodate this, India is now gearing up to revamp her roads and highways. Cities are expanding their roads, building flyovers, and installing better traffic-control systems. Still, progress is slow and congestion on city roads is worrisome, sometimes frustrating, particularly during rush hours.

The situation in Bangalore, for instance, is so bad that, on several occasions, travelling to or from the airport – a distance of 10 km – has taken me more than an hour. People in Bangalore are infuriated with the traffic. Media reports say that well-known business houses in Bangalore are planning to curb further investments in the city and move elsewhere, in an effort to highlight the government’s lack of concern in addressing the traffic-congestion issue.

However, some Indians are unfazed. They are taking to the highways with their fast cars. Going out on a 200-km road trip over the weekend, driving between cities or between a city and a holiday resort, instead of the usual train, is becoming common these days. Even when the highways weren’t so good, in 2001, I had done the 1100-km journey between Mumbai and Bangalore (and back again) in less than 18 hours. Although slow by Western standards, believe me, it was a pleasure!

Now the Indian government has stepped in, upgrading the national highways to four- and six-lane roads. A major project is the Golden Quadrilateral, a superhighway connecting Delhi, Mumbai, Chennai and Kolkata. Some portions are already complete, welcoming fast cars and motorbikes with open stretches never experienced before. For Indians with fast cars, it looks like the glory days are here.

03 October 2006

So many mouths to feed, but who’s complaining!

“India represents an economic opportunity on a massive scale, both as a global base and as a domestic market… Recent times have seen an awakening of interest in what India has to offer to global businesses.”
(A September 2005 KPMG report, ‘Consumer Markets in India – the next big thing?’)

With over a billion people, you can imagine how much food is consumed in India every day. One estimate says that, every year, India produces one ton of food for every single inhabitant. Got that! In spite of many people living below the poverty line, some starving to death, India is a huge consumer base for the world’s food and drinks market. Since this market is unorganised, with millions of small self- or family-owned shops, it’s difficult to estimate its size. A 2004 estimate by FICCI pegs it at US$70 billion for food alone. Beverages are likely to account for another US$80 billion at the least, not including alcohol.

The interesting thing is, most food consumed in India is domestically produced. India is the world’s biggest producer of livestock, the biggest producer of milk, and the second largest producer of fruits and vegetables. With a strong vegetarian population, India consumes very little meat or fish (much of India’s marine production is exported) compared to Western or other emerging nations. However, poultry is popular among non-vegetarians. Consumer spend on food varies, but an average Indian family spends half its income on food. This figure, of course, reduces with higher incomes.

Traditionally, Indians are used to consuming fresh food, shopping for fresh produce daily. So far, there’s been little fascination for processed or packaged food (around 2% of India’s agricultural output is processed). But, with globalisation and larger disposable incomes, lifestyles and consuming habits of Indians are changing. Now you can find anything from branded atta to bottled water, coffee to confectionery as packaged food on the shelves at retail stores across the country. The trend is apparent – more and more Indians are choosing to take home processed or packaged food everyday, along with their fresh groceries.

Since 80% of consumer spends in India are estimated to be on FMCG (household and personal-care products, confectionery and tobacco), even a quarter of that spend on processed or packaged food could open up a market of unbelievable proportions. Both Indian and global brands see this as a huge business opportunity, even in the face of government restrictions on direct entry of global brands in the retail sector. Homegrown Indian, as well as MNC, brands are ruling the roost at the moment, but soon most global packaged food brands are expected to be here.

Marketing has already begun in the urban centres and other towns, addressing a population of approx. 250 million. The rural markets are next, hopefully, adding another 250 million to the consumer base in the next 10 years. Opening up to the world a market of 500 million consumers! With so many mouths to feed, nobody’s complaining!
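Putting this post’s own numbers side by side, as a rough order-of-magnitude sketch (the ‘quarter shifts to packaged food’ line simply applies the hypothetical above to the quoted food-and-drinks figures, since total FMCG spend isn’t quantified here):

# A rough re-combination of the figures quoted in this post; nothing here
# comes from the KPMG or FICCI reports beyond the numbers already cited.

food_market = 70e9          # US$, FICCI 2004 estimate for food
beverage_market = 80e9      # US$, "at the least", excluding alcohol

urban_consumers = 250e6     # population addressed by current marketing
rural_consumers = 250e6     # hoped-for addition over the next 10 years

total_market = food_market + beverage_market
total_consumers = urban_consumers + rural_consumers

print(f"Food and drinks market: US${total_market / 1e9:.0f} billion")
print(f"Addressable consumers: {total_consumers / 1e6:.0f} million")
print(f"If a quarter shifted to packaged food: US${0.25 * total_market / 1e9:.0f} billion")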

01 October 2006

India assures a future for print journalism… in English

While the rest of the world is talking about the death of the printed newspaper (thanks to the Internet), India is on a new high. English-language newspapers in India are growing in circulation as well as in advertising revenues, and there’s talk of new publications coming up in the future. Already Mumbai has seen the launch of Hindustan Times, DNA and Mumbai Mirror, while Mumbai’s Mid-day has recently launched an edition in Bangalore.

But, there’s more. The International Herald Tribune started printing in India earlier this year. The Financial Times picked up a stake in Business Standard and plans to launch a South Asian edition next year. The Hindustan Times group, with branded content from the Wall Street Journal, is planning to launch a business title. Dainik Jagran (a Hindi newspaper group) is planning to print an international edition of the Independent (in English) sometime soon.

India’s regional-language newspapers are growing too (even a 1% rise in India’s literacy level impacts newspaper readership), but it’s the English readership segment that the world is watching. What’s the reason for India’s success? The answer lies in India’s fast economic growth and the impact of globalisation – both contributing to a higher standard of living and a Westernised lifestyle. As an outcome, English is used by more and more people these days, naturally increasing the demand for news in English.

Moreover, Indian newspapers are not threatened by online news and analysis as newspapers in the West are. With low penetration of the Internet – under 5% in India, compared to 69% in the US and 63% in the UK – Indian newspapers enjoy a double benefit: not only are they gathering new readers almost every day, they also don’t have to compete with online advertising rivals as their Western counterparts do. On the contrary, for English-language newspapers, advertising revenues are soaring.

In the international market, this revenue looks small at the moment. But, with India’s huge population, it promises to grow to substantial levels… in a market which is rapidly shrinking internationally. In the world of print journalism, this looks like some sort of magical turnaround. One hopes India will keep her promise.


[Citation: The Guardian, 18 September 2006, ‘India is where the action is’ by Randeep Ramesh (requires user log in)]