post

Why Google+ works (UXtraordinary blog preview)

Excerpt from UXtraordinary:

I am thrilled to see Andy Hertzfeld’s social circles concept implemented so beautifully in Google+. My semi-educated guess is that empowering users to define access to themselves according to their own purposes and needs—in context—will build engagement and strong loyalty. Why? Because it comes naturally.

Too many social media-based sites, including many social networks, pay only lip service to how users think about groups, let alone to user privacy empowerment. The user is seen as one of the site’s members, and the business creates a mental model in which the user is the center of a series of widening circles. “Empowerment” of user content privacy is typically limited to enabling permissions control within those circles.

Users don’t see themselves that way. People think of sharing information in terms of a constantly changing algorithm of need, purpose, and ability. We trust some friends more closely than others; we have acquaintances who know a great deal about us whom we barely know. It’s my belief that these clashing mental models are one of the primary reasons social media and social networks fail to engage.

Mental models of user grouping
Read more.

post

Social terminology

When you’re in a business, the jargon is like the air—you just don’t notice it. Then you listen to people outside your field using the same terms, and suddenly you see, yes, that can be confusing.

Recently someone asked, on a fairly tech-savvy mailing list I participate in, how best to describe social networking in a nutshell. I realized that I see a lot of confusion at times about this (in general, not on the above list), possibly because of how terms like social network and social media overlap. So I offered the following, which I thought I’d share here:

  • Social media is the set of communication features (sharing, reviewing, blogging, message boards, comments, etc.) used by many sites. For example, Amazon is not a social network, but helped popularize using social media to improve both user experience and the business.
  • Social network (noun) refers to a site dedicated to social purposes. It can also refer to a personal social network. A social network site’s social media features don’t merely supplement or enhance its business, they are the business.
  • Social networking (gerund) is the process of developing a social network, typically a personal one.
post

How not to be overwhelmed by data

When dealing with vast amounts of data, how to prioritize it? Lois Beckett reports this approach, from John O’Neil, curator of the New York Times’ topic pages:

The most pressing criterion for what gets updated first, O’Neil said, is whether “we would feel stupid not having it there.”

Guess that’s as good a standard as any ;–)


Beckett, Lois (23 Feb. 2011). The context-based news cycle: editor John O’Neil on the future of The New York Times’ Topics Pages, Nieman Journalism Lab, Harvard.

link

W3C semantic data extraction. Reveals the logic behind your HTML/XHTML code.

Try it with these:

Why does this matter? Because as search engines and other information trawlers grapple with more and more content, the ability to parse and understand that content becomes ever more important.
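
To make “semantic data extraction” concrete, here is a minimal sketch of my own (in Python, not the W3C service itself) that pulls the implied outline out of heading markup. The class name and the sample document are purely illustrative.

```python
# A small sketch (not the W3C service itself) of what semantic data
# extraction means: recover the outline implied by heading markup.
from html.parser import HTMLParser

class OutlineExtractor(HTMLParser):
    """Collects h1-h6 heading text, remembering each heading's level."""

    HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.current_tag = None   # heading tag we are currently inside, if any
        self.outline = []         # list of (level, text) pairs

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADING_TAGS:
            self.current_tag = tag

    def handle_endtag(self, tag):
        if tag == self.current_tag:
            self.current_tag = None

    def handle_data(self, data):
        if self.current_tag and data.strip():
            level = int(self.current_tag[1])   # "h2" -> 2
            self.outline.append((level, data.strip()))

# Illustrative document only.
sample_html = """
<h1>Why Google+ works</h1>
<h2>Mental models of user grouping</h2>
<h2>Social terminology</h2>
"""

extractor = OutlineExtractor()
extractor.feed(sample_html)
for level, text in extractor.outline:
    print("  " * (level - 1) + text)
```

Run against a well-structured page, the printed outline reads like a table of contents; run against tag soup, it reveals how little structure is actually there.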

post

How to get the best idea

This was written in response to Ross Douthat’s call for Americans to stop enabling M. Night Shyamalan. We all love him, we all want him to create another wonderful movie, and the lukewarm box office response to his recent films is not enough to push him to abandon his current approach and do that. A complete flop, Mr. Douthat reasons, will be the rock bottom necessary before Mr. Shyamalan can begin true change.

He may be right. This prompted me to share my personal theory on Mr. Shyamalan. It’s also my personal theory on how to get the best idea, whether you’re writing a story, designing a test, or figuring out the best user-centered taxonomy for a site.

I love Shyamalan’s writing, I love his directing, I love his characters. What I don’t love is how he makes all of the above subservient to an idea that forces them into unnatural, un-storylike forms.

My theory on Mr. Shyamalan: he’s been letting themes take too much control of the story. It’s like the recently-evangelized musician who thinks removing a bad lyric about Jesus is somehow betraying Jesus—instead of thinking that writing the best song possible is a better form of praise. Or the recently-transformed-by-therapy writer whose characters all act in the best interests of each other’s mental health. I believe Shyamalan becomes enthralled by an idea, and the idea drives the story, the writing, and the directing.

I heard that the killer surprise in The Sixth Sense was an afterthought. This is how good ideas show up—you follow the story/research/whatever faithfully, and have faith the best idea, or juxtaposition of ideas, will present itself as a result. Clinging to *an* idea throughout the process reduces the chance that the *best* idea—the one rooted in your fullest understanding of story, characters, data, concept—will emerge.

The best way to explore a theme in a story is to establish the situation, and let it play out in a manner true to the characters. Your concept or theme will emerge much more naturally, and people will be much more engaged.

post

The biggest barrier to UX implementation

My response to a LinkedIn UX Professionals question, “Why they don’t like to spend or invest in the User Experience tasks?”

My personal experience has been that ignorance is the largest barrier to UX implementation. While there are many exceptions, too often developers, marketers, executive management, and others with a great deal of control over UX strategy and tactical development feel that user experience is simply “common sense.” They believe that because they are users, they have insight into the process. This is natural.

It’s the responsibility of UX professionals to educate them and evangelize the value of user experience. (Though it’s nice if you can get executive support, it’s frequently not there.) At my current company, I approached this from several angles:

  • I held one-on-one meetings with stakeholders and others, seeking to understand their needs and start a conversation about possible UX solutions.
  • I wrote and presented brown bags, open to all, on subjects like Why Taxonomy Matters: Taxonomy and the User Experience, in order to promote understanding of UX and its considerations.
  • I introduced concepts designed to make people think more from the user perspective. For example, like many sites we’re interested in user-generated content. I expanded this to user-generated experience (a concept I’d already developed from previous social media work and user analysis), and measured/discussed user-generated activity. The point, of course, was that thinking about user activity required thinking about user flow and perspective. Eventually key stakeholders were talking about UGA as a matter of course, and we even discovered ways to convert some UGA into UGC.

This was successful enough that UX became a standard consideration in not just design, but product strategy. It is of course beyond your control what others do with your information – but you have to provide it!

People understand success. Show your co-workers and management how UX solves their problems. Provide numbers, using performance indicators that matter to your audience. Present before/after case studies. Remember to focus on solutions, not problems (never show a problem for which you don’t have a suggested solution). In short, provide the best possible user experience for your internal customers.


Update

Ahmed Kamal, the person who posed the question, responded positively to my comment:

Alex O’Neal, I raise my hat! I appreciate it really! your comments are really touching, reflecting a real long experience, comprehensive and concluding the problem and how to solve it!!

Aw, shucks :–)

[Cross-posted on UXtraordinary.]

link

The BBC’s style guide. An exceptional, best-in-class example of what a style guide should do.

Also of interest: a BBC blog post on their goal of a new global visual language.

post

Good blog from founder on IxDA Sturm und Drang

Off-site comment capture of the moment:

David Malouf, one of the founders of IxDA (the Interaction Design Association), has heard a lot of grief about a recent site redesign. He shared his thoughts on the subject in an open letter. Definitely worth reading.

My comment:

Personally, I see great improvement in the new site. I wasn’t able to contribute to it myself, but I’m impressed at what was done through donated, as-possible time by many people.

I think a lot of the hostility is probably misdirected energy from the economy, a lack of empowerment at work, a need to set themselves apart in the job market, etc. Sadly, it’s always easier to put down another’s work and show “expertise” with criticism, rather than to do something well yourself. (This is not to say that criticism isn’t a valid skill, just that when it exists in a vacuum it’s not very helpful.)

What about raising funds via IxDA-backed training? Leverage local IxDA members to provide reliable training in specific areas. IxDA sets the standards and accepts the payments; the trainer gets the street cred of IxDA backing, and half the money (via PayPal or whatever).

I’ll volunteer to author/present on how taxonomy affects interaction design :–)

post

Saving face for the other guy works

Off-site comment capture:

Schott’s Vocab, The New York Times, asked readers for the face-saving excuses they regularly turn to. Mine actually got marked as an Editor’s Highlight (rare for me), and received a whopping (again, for me) 15 “recommended” clicks ;–)

In a reverse approach, but still aimed at smoothing out the occasional awkwardness:

As someone with a lot of diabetes in the family history (though thankfully I’ve escaped it so far!), I avoid sugar in colas, etc. Sometimes I go through a drive-through and I’m uncertain the person gave me a diet soda (they didn’t mark the top of the cup, or didn’t repeat it back to me). But people typically don’t like their ability to perform simple tasks questioned, so I put the onus on me: “Did I remember to ask for diet soda?” They happily tell me yes, and I am reassured.

This “it’s my fault” approach works in a variety of situations.

post

Technological evolution in the Cambrian age

Another comment left in the wild, this time in response to the deeply wonderful Irving Wladawsky-Berger’s post about The Data Center in the Cambrian Age. I strongly recommend it. In the meantime, here’s what I had to say:

Great post! Reminds me of Stephen Jay Gould’s Wonderful Life, which discusses the explosion of fauna of the Burgess Shale. That book transformed my understanding not just of biology, but of creativity and human development. Since then I’ve observed this effect in other areas.

For example, movies of the ’20s and ’30s used techniques set aside in later decades, as the industry determined what they thought most appealed to the market. Ironically, some of these were then called innovative when re-used by more modern directors. Typography has gone through a similar pattern of evolution as well.

The interesting thing about an explosion of human technological innovation is that, unlike competing animal species, whose success is due as much to chance as to adaptation, humans can at least partly evaluate the value of a new idea. But in the marketplace, established companies using older approaches can crush new ideas and better approaches. Humans have to leverage the internet, governments, and their purchasing power to make sure we know *all* our options, and can choose the best one for our needs, not an abstract corporate entity’s profit line.

post

Thomas Huxley’s letter, on the death of his son

The below is from Leonard Huxley’s The Life and Letters of Thomas Henry Huxley, courtesy of Project Gutenberg. Quoted by Stephen Jay Gould.

A letter written in response to well-meant advice from Cambridge professor and priest Charles Kingsley.


14, Waverley Place, September 23, 1860.

My dear Kingsley,

I cannot sufficiently thank you, both on my wife’s account and my own, for your long and frank letter, and for all the hearty sympathy which it exhibits–and Mrs. Kingsley will, I hope, believe that we are no less sensible of her kind thought of us. To myself your letter was especially valuable, as it touched upon what I thought even more than upon what I said in my letter to you. My convictions, positive and negative, on all the matters of which you speak, are of long and slow growth and are firmly rooted. But the great blow which fell upon me seemed to stir them to their foundation, and had I lived a couple of centuries earlier I could have fancied a devil scoffing at me and them–and asking me what profit it was to have stripped myself of the hopes and consolations of the mass of mankind? To which my only reply was and is–Oh devil! truth is better than much profit. I have searched over the grounds of my belief, and if wife and child and name and fame were all to be lost to me one after the other as the penalty, still I will not lie.

And now I feel that it is due to you to speak as frankly as you have done to me. An old and worthy friend of mine tried some three or four years ago to bring us together–because, as he said, you were the only man who would do me any good. Your letter leads me to think he was right, though not perhaps in the sense he attached to his own words.

To begin with the great doctrine you discuss. I neither deny nor affirm the immortality of man. I see no reason for believing in it, but, on the other hand, I have no means of disproving it.

Pray understand that I have no a priori objections to the doctrine. No man who has to deal daily and hourly with nature can trouble himself about a priori difficulties. Give me such evidence as would justify me in believing anything else, and I will believe that. Why should I not? It is not half so wonderful as the conservation of force, or the indestructibility of matter. Whoso clearly appreciates all that is implied in the falling of a stone can have no difficulty about any doctrine simply on account of its marvellousness. But the longer I live, the more obvious it is to me that the most sacred act of a man’s life is to say and to feel, “I believe such and such to be true.” All the greatest rewards and all the heaviest penalties of existence cling about that act. The universe is one and the same throughout; and if the condition of my success in unravelling some little difficulty of anatomy or physiology is that I shall rigorously refuse to put faith in that which does not rest on sufficient evidence, I cannot believe that the great mysteries of existence will be laid open to me on other terms. It is no use to talk to me of analogies and probabilities. I know what I mean when I say I believe in the law of the inverse squares, and I will not rest my life and my hopes upon weaker convictions. I dare not if I would.

Measured by this standard, what becomes of the doctrine of immortality?

You rest in your strong conviction of your personal existence, and in the instinct of the persistence of that existence which is so strong in you as in most men.

To me this is as nothing. That my personality is the surest thing I know–may be true. But the attempt to conceive what it is leads me into mere verbal subtleties. I have champed up all that chaff about the ego and the non-ego, about noumena and phenomena, and all the rest of it, too often not to know that in attempting even to think of these questions, the human intellect flounders at once out of its depth.

It must be twenty years since, a boy, I read Hamilton’s essay on the unconditioned, and from that time to this, ontological speculation has been a folly to me. When Mansel took up Hamilton’s argument on the side of orthodoxy (!) I said he reminded me of nothing so much as the man who is sawing off the sign on which he is sitting, in Hogarth’s picture. But this by the way.

I cannot conceive of my personality as a thing apart from the phenomena of my life. When I try to form such a conception I discover that, as Coleridge would have said, I only hypostatise a word, and it alters nothing if, with Fichte, I suppose the universe to be nothing but a manifestation of my personality. I am neither more nor less eternal than I was before.

Nor does the infinite difference between myself and the animals alter the case. I do not know whether the animals persist after they disappear or not. I do not even know whether the infinite difference between us and them may not be compensated by THEIR persistence and MY cessation after apparent death, just as the humble bulb of an annual lives, while the glorious flowers it has put forth die away.

Surely it must be plain that an ingenious man could speculate without end on both sides, and find analogies for all his dreams. Nor does it help me to tell me that the aspirations of mankind–that my own highest aspirations even–lead me towards the doctrine of immortality. I doubt the fact, to begin with, but if it be so even, what is this but in grand words asking me to believe a thing because I like it.

Science has taught to me the opposite lesson. She warns me to be careful how I adopt a view which jumps with my preconceptions, and to require stronger evidence for such belief than for one to which I was previously hostile.

My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonise with my aspirations.

Science seems to me to teach in the highest and strongest manner the great truth which is embodied in the Christian conception of entire surrender to the will of God. Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing. I have only begun to learn content and peace of mind since I have resolved at all risks to do this.

There are, however, other arguments commonly brought forward in favour of the immortality of man, which are to my mind not only delusive but mischievous. The one is the notion that the moral government of the world is imperfect without a system of future rewards and punishments. The other is: that such a system is indispensable to practical morality. I believe that both these dogmas are very mischievous lies.

With respect to the first, I am no optimist, but I have the firmest belief that the Divine Government (if we may use such a phrase to express the sum of the “customs of matter”) is wholly just. The more I know intimately of the lives of other men (to say nothing of my own), the more obvious it is to me that the wicked does NOT flourish nor is the righteous punished. But for this to be clear we must bear in mind what almost all forget, that the rewards of life are contingent upon obedience to the WHOLE law–physical as well as moral–and that moral obedience will not atone for physical sin, or vice versa.

The ledger of the Almighty is strictly kept, and every one of us has the balance of his operations paid over to him at the end of every minute of his existence.

Life cannot exist without a certain conformity to the surrounding universe–that conformity involves a certain amount of happiness in excess of pain. In short, as we live we are paid for living.

And it is to be recollected in view of the apparent discrepancy between men’s acts and their rewards that Nature is juster than we. She takes into account what a man brings with him into the world, which human justice cannot do. If I, born a bloodthirsty and savage brute, inheriting these qualities from others, kill you, my fellow-men will very justly hang me, but I shall not be visited with the horrible remorse which would be my real punishment if, my nature being higher, I had done the same thing.

The absolute justice of the system of things is as clear to me as any scientific fact. The gravitation of sin to sorrow is as certain as that of the earth to the sun, and more so–for experimental proof of the fact is within reach of us all–nay, is before us all in our own lives, if we had but the eyes to see it.

Not only, then, do I disbelieve in the need for compensation, but I believe that the seeking for rewards and punishments out of this life leads men to a ruinous ignorance of the fact that their inevitable rewards and punishments are here.

If the expectation of hell hereafter can keep me from evil-doing, surely a fortiori the certainty of hell now will do so? If a man could be firmly impressed with the belief that stealing damaged him as much as swallowing arsenic would do (and it does), would not the dissuasive force of that belief be greater than that of any based on mere future expectations?

And this leads me to my other point.

As I stood behind the coffin of my little son the other day, with my mind bent on anything but disputation, the officiating minister read, as a part of his duty, the words, “If the dead rise not again, let us eat and drink, for to-morrow we die.” I cannot tell you how inexpressibly they shocked me. Paul had neither wife nor child, or he must have known that his alternative involved a blasphemy against all that was best and noblest in human nature. I could have laughed with scorn. What! because I am face to face with irreparable loss, because I have given back to the source from whence it came, the cause of a great happiness, still retaining through all my life the blessings which have sprung and will spring from that cause, I am to renounce my manhood, and, howling, grovel in bestiality? Why, the very apes know better, and if you shoot their young, the poor brutes grieve their grief out and do not immediately seek distraction in a gorge.

Kicked into the world a boy without guide or training, or with worse than none, I confess to my shame that few men have drunk deeper of all kinds of sin than I. Happily, my course was arrested in time–before I had earned absolute destruction–and for long years I have been slowly and painfully climbing, with many a fall, towards better things. And when I look back, what do I find to have been the agents of my redemption? The hope of immortality or of future reward? I can honestly say that for these fourteen years such a consideration has not entered my head. No, I can tell you exactly what has been at work. “Sartor Resartus” led me to know that a deep sense of religion was compatible with the entire absence of theology. Secondly, science and her methods gave me a resting-place independent of authority and tradition. Thirdly, love opened up to me a view of the sanctity of human nature, and impressed me with a deep sense of responsibility.

If at this moment I am not a worn-out, debauched, useless carcass of a man, if it has been or will be my fate to advance the cause of science, if I feel that I have a shadow of a claim on the love of those about me, if in the supreme moment when I looked down into my boy’s grave my sorrow was full of submission and without bitterness, it is because these agencies have worked upon me, and not because I have ever cared whether my poor personality shall remain distinct for ever from the All from whence it came and whither it goes.

And thus, my dear Kingsley, you will understand what my position is. I may be quite wrong, and in that case I know I shall have to pay the penalty for being wrong. But I can only say with Luther, “Gott helfe mir, Ich kann nichts anders.”

I know right well that 99 out of 100 of my fellows would call me atheist, infidel, and all the other usual hard names. As our laws stand, if the lowest thief steals my coat, my evidence (my opinions being known) would not be received against him. [The law with respect to oaths was reformed in 1869.]

But I cannot help it. One thing people shall not call me with justice and that is–a liar. As you say of yourself, I too feel that I lack courage; but if ever the occasion arises when I am bound to speak, I will not shame my boy.

I have spoken more openly and distinctly to you than I ever have to any human being except my wife.

If you can show me that I err in premises or conclusion, I am ready to give up these as I would any other theories. But at any rate you will do me the justice to believe that I have not reached my conclusions without the care befitting the momentous nature of the problems involved.

And I write this the more readily to you, because it is clear to me that if that great and powerful instrument for good or evil, the Church of England, is to be saved from being shivered into fragments by the advancing tide of science–an event I should be very sorry to witness, but which will infallibly occur if men like Samuel of Oxford are to have the guidance of her destinies–it must be by the efforts of men who, like yourself, see your way to the combination of the practice of the Church with the spirit of science. Understand that all the younger men of science whom I know intimately are ESSENTIALLY of my way of thinking. (I know not a scoffer or an irreligious or an immoral man among them, but they all regard orthodoxy as you do Brahmanism.) Understand that this new school of the prophets is the only one that can work miracles, the only one that can constantly appeal to nature for evidence that it is right, and you will comprehend that it is of no use to try to barricade us with shovel hats and aprons, or to talk about our doctrines being “shocking.”

I don’t profess to understand the logic of yourself, Maurice, and the rest of your school, but I have always said I would swear by your truthfulness and sincerity, and that good must come of your efforts. The more plain this was to me, however, the more obvious the necessity to let you see where the men of science are driving, and it has often been in my mind to write to you before.

If I have spoken too plainly anywhere, or too abruptly, pardon me, and do the like to me.

My wife thanks you very much for your volume of sermons.

Ever yours very faithfully,

T.H. Huxley.

post

The tyranny of dichotomy

An informational cascade is a perception—or misperception—spread among people because we tend to let others think for us when we don’t know a subject ourselves. For example, John Tierney (tierneylab.blog.nytimes.com) recently discussed the widely held but little-supported belief that too much fat is nutritionally bad. Peter Duesberg contends that the HIV hypothesis for AIDS is such an error.

Sometimes cultural assumptions can lead to such errors. Stephen Jay Gould described countless such mistakes, spread by culture or simple lack of data, in The Mismeasure of Man. Gould points out errors such as reifying abstract concepts into entities that exist apart from our abstraction (as has been done with IQ), and forcing measurements into artificial scales, both assumptions that spread readily inside and outside the scientific community without any backing.

Mind, informational cascades do not have to be errors—one could argue that the state of being “cool” comes from an informational cascade. Possibly many accurate understandings come via informational cascades as well, but it’s harder to demonstrate those because of the nature of the creatures.

It works like this: people tend to think in binary, all-or-nothing terms. Shades of gray do not occur. In fact, it seems the closest we come to a non-binary understanding of a concept is to have many differing binary decisions about related concepts, which balance each other out.

So, in the face of no or incomplete information, we take our cues from the next human. When Alice makes a decision, she decides yes-or-no; then Bob, who knows nothing of the subject, takes his cue from Alice in a similarly binary fashion, and Carol takes her cue from Bob, and so it spreads, in a cascade effect.
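
As a toy sketch of that chain (my own illustration, not a model from the economics literature), imagine that the first person acts on a noisy private signal and everyone after simply copies the person before them, all or nothing:

```python
# Toy informational cascade: the first person decides from a noisy private
# signal; everyone after simply copies the previous person's decision.
import random

def cascade(n_people, truth=True, signal_accuracy=0.7, seed=0):
    random.seed(seed)
    decisions = []
    for i in range(n_people):
        # Each person gets a private signal that is right 70% of the time...
        private_signal = truth if random.random() < signal_accuracy else not truth
        # ...but only the first person actually uses it; the rest copy.
        decision = private_signal if i == 0 else decisions[-1]
        decisions.append(decision)
    return decisions

for run in range(5):
    result = cascade(20, seed=run)
    correct = sum(1 for d in result if d is True)
    print(f"run {run}: first decision {result[0]}, "
          f"{correct}/{len(result)} end up holding the true belief")
```

Whenever that first, thinly informed decision happens to be wrong, the entire chain inherits the error, even though most people’s own signals would have pointed the other way.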

Economists and others rely on this binary herd behavior in their calculations.

But.

The problem is that people don’t always think this way; therefore people don’t have to think this way. Some people seem to develop the habit of critical thought at an early age. As well, the very concept of binary thinking seems to fit too neatly into our need to measure. It’s much easier to measure all-or-nothing than shades of gray, so a model that assumes we behave in an all-or-nothing manner can easily be measured, and is therefore more easily accepted within the community of discourse.

Things tend to be more complex than we like to acknowledge. As Stephen Wolfram observed in A New Kind of Science,

One might have thought that with all their successes over the past few centuries the existing sciences would long ago have managed to address the issue of complexity. But in fact they have not. And indeed for the most part they have specifically defined their scope in order to avoid direct contact with it.

Which makes me wonder if binary classification isn’t its own informational cascade. In nearly every situation, there are more than two factors and more than two options.

The tradition of imposing a binary taxonomy on our world goes back a long way. Itkonen (2005) speaks about the binary classifications that permeate all mythological reasoning. By presenting different quantities as two aspects of the same concept, they are made more accessible to the listener. By placing them within the same concept, the storyteller shows their similarities, and uses analogical reasoning to reach the audience.

Philosophy speaks of the law of the excluded middle—something is either this or that, with nothing in between—but this is a trick of language. A question that asks for only a yes or no answer does not allow for responses such as both or maybe.

Neurology tells us that neurons either fire or they don’t. But neurons are much more complex than that. From O’Reilly and Munakata’s Computational Explorations in Cognitive Neuroscience (italics from the authors):

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature… Neurons integrate information from a large number of different input sources, producing essentially a continuous, real valued number that represents something like the relative strength of these inputs… The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true….

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values….

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other—instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contribution). This fact gives rise to the phenomenon of graceful degradation, where function degrades “gracefully” with increasing amounts of damage to neural tissue.
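
To make the contrast concrete, here is a small sketch of my own (not code from O’Reilly and Munakata): a boolean unit and a graded unit integrating the same inputs. The inputs and weights are made up purely for illustration.

```python
# A boolean unit and a graded unit integrating the same inputs.
import math

inputs  = [0.9, 0.2, 0.7, 0.1]    # signals arriving from other "neurons"
weights = [0.8, -0.5, 0.6, 0.3]   # strength of each connection

net = sum(i * w for i, w in zip(inputs, weights))   # integrate the inputs

boolean_output = 1 if net > 0.5 else 0              # all-or-nothing verdict
graded_output = 1 / (1 + math.exp(-net))            # firing rate between 0 and 1

print(f"net input:     {net:.2f}")
print(f"boolean unit:  {boolean_output}")
print(f"graded unit:   {graded_output:.2f}")
```

The boolean unit can only say yes or no; the graded unit can say how strongly the evidence leans, which is exactly the kind of shades-of-gray signal the quoted passage describes.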

So, now that we have a clue that binary thinking may be an informational cascade all its own, what do we do about it? That’s a subject for another post.


References

  • Itkonen, E. (2005). Analogy as structure and process: Approaches in linguistics, cognitive psychology and philosophy of science. Amsterdam: John Benjamins Publishing.
  • O’Reilly, R.C., and Y. Munakata. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.
post

How not to ask questions

From the NY Times, Yes, Running Can Make You High, 27 March 2008:

Yes, some people reported that they felt so good when they exercised that it was as if they had taken mood-altering drugs. But was that feeling real or just a delusion? And even if it was real, what was the feeling supposed to be, and what caused it?

The bit that bothers me is the middle sentence. Unless the NY Times is seriously proposing that all runners reporting a high are either lying or confabulating, a feeling is a feeling is a feeling. It is real, and originates and is reflected in your physical being, just as every aspect of your experience is. The only relevant definition of real in the context of feelings is “real to the individual experiencing them,” and that cannot be questioned. Of course it’s real; they experienced it.

The question What [is] the feeling supposed to be? has the opposite problem. Instead of a universal affirmative (Of course it’s real), this question can have a variety of answers, some or all of which may be correct. It’s a useful question to ask, since it helps us parse the situation from different angles, but it cannot be answered definitively.

The most answerable question here, at least on a biological level, is not Is it real?, but What caused it?, asked in the last sentence.

Psychology will not have come into its own until there is a general understanding that there is no difference between mind and body. This is why the talking cure, journaling, and other cognitive therapies work: they subtly affect—in essence, reprogram—the brain. Our experience occurs in the interaction of our selves with our environment. The challenge in psychology is in defining the parameters of the environment, and the capabilities of the self in reacting to and understanding it.

post

Messy is fun: stepping away from Occam’s Razor

The scientific method is the most popular form of scientific inquiry, because it provides measurable testing of a given hypothesis. This means that once an experiment is performed, whether the results were negative or positive, the foundation on which you are building your understanding is a little more solid, and your perspective a little broader. The only failed experiment is a poorly designed one.

So, how to design a good experiment? The nuts and bolts of a given test will vary according to the need at hand, but before you even go about determining what variable to study, take a step back and look at the context. The context in which you are placing your experiment will determine what you’re looking for and what variables you choose. The more limited the system you’re operating in, the easier your test choices will be, but the more likely you are to miss something useful. Think big. Think complicated. Then narrow things down.

But, some say, simple is good! What about Occam’s razor and the law of parsimony (entities should not be unnecessarily multiplied)?

Occam’s razor is a much-loved approach that helps make judgment calls when no other options are available. It’s an excellent rule of thumb for interpreting uncertain results. Applying Occam’s razor, you can act “as if” and move on to the next question, and go back if it doesn’t work out.

Still, too many people tend to use it to set up the context of the question, unconsciously limiting the kind of question they can ask and limiting the data they can study. It’s okay to do this consciously, by focusing on a simple portion of a larger whole, but not in a knee-jerk fashion because “simple is better.” Precisely because of this, several scientists and mathematicians have suggested anti-razors. These do not necessarily undermine Occam’s razor. Instead, they phrase things in a manner that helps keep you focused on the big picture.

Some responses to Occam’s concept include these:

Einstein: Everything should be as simple as possible, but no simpler.

Leibniz: The variety of beings should not rashly be diminished.

Menger: Entities must not be reduced to the point of inadequacy.

My point is not that Occam’s razor is not a good choice in making many decisions, but that one must be aware that there are alternative views. Like choosing the correct taxonomy in systematics, choosing different, equally valid analytic approaches to understand any given question can radically change the dialogue. In fact, one can think of anti-razors as alternative taxonomies for thought: ones that let you freely think about the messy things, the variables you can’t measure, the different perspectives that change the very language of your studies. You’ll understand your question better, because you’ll think about it more than one way (thank you, Dr. Minsky). And while you’ll need to pick simple situations to test your ideas, the variety and kind of situations you can look at will be greatly expanded.

Plus, messy is fun.


Note: Some of these thoughts sprang from a letter to the editor I posted on Salon.

Cross-posted on UXtraordinary.com.

post

Zombie ideas

In 1974 Robert Kirk wrote about the “zombie idea,” describing the concept that the universe, the circle of life, humanity, and our moment-to-moment existence could all have developed identically, with “particle-for-particle counterparts,” and yet lack feeling and consciousness. The idea is that, evolutionarily speaking, it is not essential that creatures evolve consciousness or raw feels in order to evolve rules promoting survival and adaptation. Such a world would be a zombie world, acting and reasoning but just not getting it (whatever it is).

I am not writing about Kirk’s idea. (At least, not yet.)

Rather, I’m describing the term in the way it was used in 1998, by four University of Texas Health Science Center doctors, in a paper titled, “Lies, Damned Lies, and Health Care Zombies: Discredited Ideas That Will not Die” (pdf). Here the relevant aspect of the term “zombie” is refusal to die, despite being killed in a reasonable manner. Zombie ideas are discredited concepts that nonetheless continue to be propagated in the culture.

While they (and just today, Paul Krugman) use the term, they don’t explicate in great detail. I thought it might be fun to explore the extent to which a persistent false concept is similar to a zombie.

  • A zombie idea is dead. For the vast majority of the world, the “world is flat” is a dead idea. For a few, though, the “world is flat” virus has caught hold, and this idea persists even in technologically advanced cultures.
  • A zombie idea is contagious. Some economists are fond of the concept of “binary herd behavior.” The idea is that when most people don’t know about a subject, they tend to accept the view of the person who tells them about it; and they tend to do that in an all-or-nothing manner. Then they pass that ignorant acceptance on to the next person, who accepts it just as strongly. (More about the tyranny of the dichotomy later.) So, when we’re children and our parents belong to Political Party X, we may be for Political Party X all the way, even though we may barely know what a political party actually is.
  • A zombie idea is hard to kill. Some zombie viruses are very persistent. For example, most people still believe that height and weight are a good basis for determining appropriate calorie intake. Studies, however, repeatedly show that height and weight being equal, other factors can change the body’s response. Poor gut flora, certain bacteria, and even having been slightly overweight in the past can mean that of two people of the same height and weight, one will eat the daily recommended calories and keep their weight steady, and one will need to consume 15% less in order to maintain the status quo. Yet doctors and nutritionists continue to counsel people to use the national guidelines to determine how much to eat.
  • A zombie idea eats your brain. Zombie ideas, being contagious and false, are probably spreading through binary thinking. A part of the brain takes in the data, marks it as correct, and because it works in that all-or-nothing manner, contradictory or different data has a harder time getting the brain’s attention. It eats up a part of the brain’s memory, and by requiring more processing power to correct it, eats up your mental processing time as well. It also steals all the useful information you missed because your brain just routed the data right past your awareness, thinking it knew the answer.
  • Zombies are sometimes controlled by a sorcerer, or voodoo bokor. Being prey to zombie ideas leaves you vulnerable. If you have the wrong information, you are more easily manipulated by the more knowledgeable. Knowledge, says Mr. Bacon, is power.
  • Zombies have no higher purpose than to make other zombies. Closely related to the previous point. Even if you are not being manipulated, your decision-making suffers greatly when you are wrongly informed. You are also passing on your wrong information to everyone you talk to about it. Not being able to fulfill your own purposes, you are simply spreading poor data.

So we see that the tendency to irony is not just useful in and of itself, but useful in helping prevent zombie brain infections. As lunchtime is nearly over, and I can’t think of more similarities, I’m stopping here to get something to eat.

[Exit Alex stage right, slouching, mumbling, “Must…eat…brains.”]


Cross-posted in UXtraordinary.

post

Cannot vs. can not

There is a grammatical misunderstanding common to many U.S. Americans, largely because we learned about grammar in the either/or terms of right vs. wrong. Here’s the misunderstanding: can not or cannot? My public school teachers said can not was the correct form, and that cannot was a corruption. A friend of mine from a previous generation was taught the opposite. Her son, much better at using the language than either of us, said both were right, but usage depended on context.

Here’s the explanation: If I can not do something, then I can also do it. I can not write these words if I choose (and you may think I shouldn’t), but I also can, and am, writing them. What I cannot do is know who will read them, or what they will think. I can imagine such things, but I’m limited by my experience and perceptions. So this is the rule: if you either could or could not do something, then you use two words, because you can leave out the second word if you so choose. If you could not do something no matter how much you desired or tried, then you use one word, cannot. There is no other option.

Sometimes both are true. Witness:

I cannot change the world.

I can not change the world.

It’s true, I cannot change the world. What I mean, and what many mean when they say or think this to themselves, is that the world’s problems are too big for any one person, or group of people, to take on. Poverty, sickness, hatred, love, weather, earthquakes, political and religious differences—these are inevitable conditions. Even Jesus said, “the poor you will have with you always,” and, “Let the dead bury the dead.”

It’s also true that I can change the world. I, and every other person on the planet, can make a difference. We can give to the poor, and try to cure ourselves of the sickness of wealth (more on that later). We can be courteous, we can provide emotional (listening) or physical (assisting) or financial (donating) help to others, we can feed and help and forgive each other. (More about forgiveness later, too.) We can take in an abandoned dog or cat and give it love. We can plant a garden. We can put in a day’s work and know we earned our pay, and someone, hopefully, was the better for it. We can not cut off someone in traffic. We can dedicate our lives to healing. We can dedicate our lives to loving our family and community. We can respect the differences of others. In other words, what we can do, we can do.

Grammar is the tool we use to communicate and should be taught as such. Our bodies, our minds, and our voices are the tools we have to interact with our universe. We must use them while we live; we cannot evade using them except through death or dire injury. In this sense we cannot not change the world. And now, while the world suffers on every level, from the sky to the deeps of the sea, from humans to tiny coral polyps, we can make what time we have count.

Don’t berate yourself for previous behavior. Don’t congratulate yourself, either. Just take the next opportunity to make a difference to the next person, and help make what we cannot change bearable.

29 December, 2003


Further discussion

6 January, 2009

Occasionally I get emails from people in response to this, ranging from pleased thanks to detailed explanations of why the option cannot be other than “can not” or “cannot.” Recently one of these linked to English-Test.net, a site dedicated to improving English skills.

I dipped into the site and found a message board with varying perspectives, and replied, signing up as “Logical.” The discussion was fruitful (among other things, I got a nice refresh on modals). Below are some arguments against using both forms in different contexts, along with my response, drawn from this discussion and email exchanges. (Read the English-Test discussion in full.)

  • The two forms mean the same thing, so we should just pick one and use it.

    The point of grammar is to make sense, and making “cannot vs. can not” an either-or situation ignores the logic of the words themselves. They are two different forms, and therefore necessarily mean different things. “Cannot” means it cannot happen at all. There isn’t a “can” option to contrast to it. I cannot go back in time, for example. The reason we don’t have an equivalent “shouldnot” or “mightnot” is that the essence of should and might doesn’t lend itself to this option. “Can,” though, readily implies its absolute opposite.

    “Can not” means it might happen; it can happen, or it can not happen. I can not post this comment if I choose. If you might not do a thing, then you can choose not to do it. So a person can say, with perfect consistency, “I can not do that, therefore I might not do that.”

    The very fact there is such a debate over this should be taken as a symptom that there’s a problem with the either-or scenario. It simply doesn’t make sense to restrict the language artificially, in order to force an illogical rule (whichever rule you learned). If it doesn’t make sense, it’s not good grammar.

  • The scope of the negation is the same in both, because “not” or “-not” belongs to the following verb phrase.

    Thanks to OxfordBlues on English-Test.Net, because this argument forced me to think things through more deeply.

    The idea is that “can” is apart from the “not _____” portion of the statement, whether in “cannot” or “can not” form. But it seems to me that if “not” is a syllable within the word, rather than a word following it, then it clearly belongs to the word itself, not to a subordinated phrase. This implies “cannot” bears a different meaning from “can not.” The ability of “can” in “can not” to exist without the word “not” implies there is an alternative state to not being able to do a thing, just as the permanency of “-not” in “cannot” implies no alternative.

    OxfordBlues suggested using a version of “be + able” to evaluate the difference in forms. To me, this made sense with “cannot” but not with “can not”, which demonstrated my point:

    David cannot drive. (David lacks the skill set for driving.)

    David is not able to drive. (This accurately describes David’s state.)

    Caroline can not drive. (Caroline could drive, but can choose to let someone else do it, or to walk instead.)
    Caroline is not able to drive. (This doesn’t accurately describe Caroline’s state.)

    Applying “will” options looks like this:

    David will not be able to drive. (Perfectly accurate.)
    Caroline will not be able to drive. (This doesn’t accurately describe Caroline’s state, since she might very well be able to, but choose not to do so.)
    Caroline will not drive. (This only works if it has been decided Caroline will not drive.)

    After much discussion, Bart (my husband, a much more accomplished scholar than I am) suggested the following sentence, which I submitted to English-Test.Net for feedback:

    I cannot not pay my rent and live in my home.

    Alan, the charming co-founder of the site, responded, “This to me suggests that non-payment of the rent is an impossibility for me. Surely in that case ‘can’ and ‘not’ are joined at the hip.”

  • Separating out the not from the word is merely an emphatic form with the same meaning.

    There are two arguments against this:

    • Emphasis is, for the most part, not written down, apart from the occasional bold-faced or italicized rich text formatting, or in eye-dialect. There’s nothing to stop emphasis from being added to either form by a reader or speaker. Interpretation of emphasis is dependent on context and the individual reader or speaker.
    • This is not a rule used in other verbs that I can discover, but a rationalization springing from lack of understanding. For example, the emphatic nature of the sentence, “I will not do that” depends on what is being refused. “I will not take the bus” is quite different from, “I will not murder.” The sentence stands well enough on its own, which is probably why we’ve never developed the form, “I willnot do that.”