Blog Archives

Steno by the Numbers


This article first appeared in the March 2005 issue of the Journal of Court Reporting. Despite being over ten years old, it might be something fun for any court reporters out there who are also math geeks. I know of at least one…

Do you have an analytical mind? Has it ever caused you to look down at your steno machine and wonder how many different strokes there are? How you could measure writing speed? Or how many different ways there were to misstroke “gubernatorial”? Then read on. In addition to having some fun with statistics, we’ll look at a few numbers that just might change the way you think about court reporting.

The Steno Keyboard

If you had a steno keyboard with two keys—call them A and B—you could write four different strokes: A, B, AB, and nothing at all. Since not pressing any keys isn’t really a stroke, we’ll subtract one and call it three possibilities. Add a third key—called C, perhaps—and the number of possible strokes jumps to eight (A, B, C, AB, AC, BC, ABC, and no keys), minus one for that “no keys” possibility. Each key you add doubles the number of strokes. A mathematician would say that a keyboard with K keys could generate 2^K - 1 different strokes.

A steno keyboard has seven keys on the left and ten on the right, plus an asterisk and four vowels, for a total of 22. Following our pattern from above, your steno machine has 2^22 - 1 possible strokes, a total of 4,194,303. If you use the number bar, your possibilities almost double again. I say “almost” because pressing the number bar by itself doesn’t count as a stroke on most machines, so you end up with 2^23 - 2 possible different strokes, or 8,388,606.
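
If you’d rather let a computer do the counting, here is a quick Python sketch (my own throwaway code, not part of any CAT system; the function and variable names are mine) that reproduces the numbers above:

    # Possible strokes on a keyboard with a given number of keys.
    # We subtract the combinations that don't count as a stroke
    # (no keys pressed, or the number bar by itself).
    def possible_strokes(keys, non_strokes=1):
        return 2 ** keys - non_strokes

    print(possible_strokes(2))       # 3 strokes on a two-key keyboard
    print(possible_strokes(22))      # 4,194,303 without the number bar
    print(possible_strokes(23, 2))   # 8,388,606 with the number bar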

Obviously, there are certain strokes you can’t physically hit. Do you use both banks (STKPWHR-FRPBLGTS) as a speaker ID, perhaps for “The Court”? Try to hit that stroke with a number bar in it. There may be someone out there with a stroke in their dictionary ending in -TZ (without a -D or -S). If so, they’ll probably email me, but until then, I’ll call that one an impossible stroke, too. I tried to work out all of these impossible (or at least highly improbable) strokes, and I came up with fewer than 10,000 of them. Even if we stretch the definition to include strokes that are possible, but very difficult to hit cleanly (-FBLS, for example), eight million possible different strokes looks like a pretty good estimate.

Misstrokes and Your Dictionary

If there’s a word you frequently misstroke, what’s the easiest way to deal with it? Put the misstroke in your dictionary! In the early days of CAT, you had to watch your dictionary size carefully. Many systems had upper limits on dictionary size, and a large dictionary could dramatically slow down your translation. With today’s computers, massive dictionaries can easily be held in memory, and dictionaries with over 100,000 entries have become routine.

When I taught CAT back in the 1980s, I encouraged my students to keep their dictionaries lean and mean for fast translation and easy editing. Today, a few extra entries won’t hurt anything, and I even recommend adding misstrokes preemptively. When you’re adding a word or phrase to your dictionary, think how you might misstroke it, and put those misstrokes in during your prep. That way, your realtime translation just might come out cleaner. It’s not feasible, however, to add all of the possible ways you might misstroke something.

[Figure 1]

How many ways are there to misstroke something? Let’s take a simple stroke like KATS. You need three fingers for that stroke: left ring finger for the K-, left thumb for the A, and right little finger for the -TS. In figure 1, the green circle (position 5) shows where your right little finger should press for this stroke, and the eight red circles show what happens as that finger goes off-center. If you press at circle 4, for example, you’ll get -LGTS instead of -TS. The nine possible finger positions generate nine possible steno combinations, eight of which are incorrect. It is also possible that your finger doesn’t go down far enough to register at all. That brings us up to ten combinations, nine of which are errors.

Your finger could, of course, be even farther off than the eight “error positions” shown in figure 1, but that’s unlikely and uncommon enough that we don’t need to consider it. What if your finger is halfway between the correct position and one of the positions I’ve shown as an error? Well, either the key will register on the computer, or it won’t. You can’t get half of an error, and since we’re just counting possible errors here, we don’t have to consider that possibility.

[Figure 2]

But what of the K- in KATS? Since the two S- keys are effectively one key, does this reduce the possibilities? No, because each position still generates a different stroke. Position 1 produces ST-, position 4 produces STK-, and position 7 produces SK-. There are still ten possible positions (the nine circles plus “no stroke”).

With your thumb, there are only two keys, and nothing else close enough for a likely error, so instead of ten possible combinations, there are only four (A, O, AO, and no vowel at all).

The grand total, then, for the stroke KATS, is 10 x 10 x 4 = 400 possible ways you could write it. One of those is correct, and one (none of the fingers registering at all) can’t be entered in a dictionary, so there are 398 possible misstrokes. And this doesn’t even factor in the possibility of shadowing with another finger!
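
Here is the same calculation as a small Python sketch. The per-finger counts come from the rough model described above, not from measurements of any real dictionary, and the labels are mine:

    # Rough model of the KATS stroke: each finger has some number of
    # combinations it can register, including "didn't press hard enough."
    positions = {
        "left ring finger (K-)":      10,  # 9 circle positions + no keys
        "left thumb (A)":              4,  # A, O, AO, or no vowel
        "right little finger (-TS)":  10,  # 9 circle positions + no keys
    }

    total = 1
    for count in positions.values():
        total *= count

    print(total)        # 400 ways the stroke could come out
    print(total - 2)    # 398 misstrokes (subtract the correct stroke and
                        # the case where nothing registers at all)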

Should you enter all 398 of these into your dictionary as misstrokes for cats? Definitely not! Many of those are unlikely (pats, for example), and many others are valid words, such as cat, scat, scats, cot, cots, and cogs. Even with dictionaries of unlimited size, entering all of the possible misstrokes for a word just doesn’t make sense. Instead, enter only the ones that actually happen to you, and don’t conflict with other words.

[Author Note 2015: Most of today’s CAT software includes algorithms for identifying and correcting errors like this automatically. These algorithms are often incorrectly called “artificial intelligence,” but we’ll save that argument for another article!]

Communicating With the Computer

I’ve heard people pondering why it took steno keyboard manufacturers so long to move from the old, slow serial interface to new, high-speed interfaces like USB. The answer? They didn’t need the speed. The only reason for adding USB to a steno keyboard is that so many new computers don’t have serial ports any more.

When computers communicate, they break data into chunks. The basic chunk is called a “bit,” which is short for binary digit. A bit can represent either one or zero. The next bigger chunk is the “byte,” which has eight bits. It can hold a number from 0 to 255. A steno stroke requires 23 bits—one for each key on the steno keyboard, plus the number bar—which means it will fit in three bytes. Many interfaces have extra information such as stenomarks, so four bytes is more typical.

Communication speed for computers is measured in bits per second. For some rather complex reasons involving “start bits” and other obscure telecommunications protocols, it usually takes ten bits rather than eight to transmit a byte over a serial port. If a stroke takes four bytes, then it will take 40 bits to send it over a typical serial line.

If you can sustain a speed of five strokes per second, then you’re really moving. If each stroke takes 40 bits to send, then a steno machine would have to communicate with the computer at 5×40=200 bits per second to handle that blazing five stroke per second speed.
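
If you like to see the arithmetic spelled out, here is a sketch using the assumptions above (four bytes per stroke, ten bits per byte on a serial line, five strokes per second); the constant names are just mine:

    BITS_PER_SERIAL_BYTE = 10   # 8 data bits plus start/stop overhead
    BYTES_PER_STROKE = 4        # a typical steno protocol, per the text
    STROKES_PER_SECOND = 5      # a very fast writer

    bits_per_stroke = BYTES_PER_STROKE * BITS_PER_SERIAL_BYTE
    required_bps = STROKES_PER_SECOND * bits_per_stroke

    print(bits_per_stroke)          # 40 bits per stroke
    print(required_bps)             # 200 bits per second
    print(56_000 // required_bps)   # a 56K modem is ~280 times faster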

Does your computer have a modem? If so, it’s probably a 56K modem, which can transmit approximately 56,000 bits of information each second over a standard phone line (I’m oversimplifying here, but that’s close enough for our purposes). That’s over 250 times faster than your steno machine needs to communicate, and your serial port is capable of operating much faster than that. Even today, captioners routinely use old, slow modems, because they simply don’t need the speed.

USB is a much better way to download huge picture files from your camera, but it is overkill for your steno machine by at least four orders of magnitude.

How Fast is Fast?

Wouldn’t it be nice to have a speedometer on your steno machine to show you how fast you were writing? You wouldn’t want it to face the attorneys, of course, because they’d take it as a matter of pride if they could “red line” your steno machine! The problem here is figuring out exactly how to measure your writing speed.


On a standard typewriter or computer keyboard, which is called a QWERTY keyboard because of the keys at the top left, people use a very simple measurement for words per minute (wpm). If you assume an average English word is five letters plus a space or punctuation mark, then you can just count how many keys you press in a minute, divide by six, and you have your typing speed in wpm.

It’s a bit more difficult in steno. What do you really want to measure? If it’s the speed of your hands, count strokes. If it’s your final output, count letters or actual words. Schools and speed tests tend to count syllables, figuring that it’s a more consistent (and less theory-dependent) measure of what you’re doing.

Using the chart in Mark Kislingbury’s article “Rev Up Your Writing” in the July/August 2004 edition of the JCR, writing at 200 wpm could mean anywhere from three strokes per second (if you use an incredibly brief-heavy theory like Mark’s) on up to five strokes per second (if you write everything out and come back for all of your inflected endings). For comparison, writing 200 wpm on a QWERTY keyboard would require typing 1,200 keystrokes per minute, or 20 keystrokes per second, a virtually impossible feat.
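
For the arithmetic-minded, here is how those comparisons work out in a quick sketch. The strokes-per-word ratios are illustrative values of my own that reproduce the three-to-five-strokes-per-second range above, not figures from Mark’s chart:

    # QWERTY: an average word is 5 letters plus a space, so 6 keystrokes.
    def qwerty_keystrokes_per_second(wpm):
        return wpm * 6 / 60

    print(qwerty_keystrokes_per_second(200))   # 20 keys per second

    # Steno: strokes per second at 200 wpm for different theories.
    for label, strokes_per_word in [("brief-heavy", 0.9), ("write-it-out", 1.5)]:
        print(label, 200 * strokes_per_word / 60)   # 3.0 and 5.0 strokes/sec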

With the advanced displays available on the latest crop of steno machines, a speedometer showing strokes per second (or strokes per minute) would be easy to add. A words-per-minute speedometer is slightly more complex: it would require a steno machine that translates as it writes, so it could count the actual words generated each minute. Either type of speedometer could easily be added to a CAT program, as some have already done.

Your Poor, Abused Steno Keyboard

When my brother and I were working on the design of a steno keyboard, we went to see a parts manufacturer about some custom key stem designs. One of the first questions he asked was, “How many times will this key get pressed?” Wow! What a question!

We approached it by looking at worst-case numbers. A court reporter with a busy schedule who reports five days a week writes a whole lot of strokes. A quick survey of reporters showed that the number of strokes written in a full day varies all over the map, but we settled on 75,000 strokes as being a pretty big number. Assuming a couple of weeks of vacation time, that reporter will work 50 weeks a year at 5 days per week, for a total of 250 working days. Multiply that by our 75,000-stroke day, and we’ve got 18,750,000 strokes per year. Do your wrists hurt yet?

How long does a steno machine last? While we know of reporters using machines that are 20+ years old, CAT reporters tend to upgrade more often than that. Even so, you wouldn’t want to buy a machine that wouldn’t last 20 years, would you? Doing the math, that means your steno machine could easily take over a third of a billion (375,000,000) strokes over the course of a 20-year career!
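
The career arithmetic, as a sketch. The daily stroke count is the worst-case estimate from our informal survey, not a measured average:

    STROKES_PER_DAY = 75_000   # a very busy day
    DAYS_PER_YEAR = 250        # 50 working weeks x 5 days
    CAREER_YEARS = 20

    strokes_per_year = STROKES_PER_DAY * DAYS_PER_YEAR
    career_strokes = strokes_per_year * CAREER_YEARS

    print(f"{strokes_per_year:,}")   # 18,750,000 strokes per year
    print(f"{career_strokes:,}")     # 375,000,000 strokes per career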

A Career on a Disk

In the early days of CAT, dictionary size limitations often came from what could fit on a floppy diskette. PC-based CAT systems started out with hard drives holding 20 megabytes or so. Backing up was important not only to be safe, but because you just couldn’t fit that much information on a hard drive.

Today, instead of backing up dictionaries on floppies, we can just write them to a blank CD, along with everything else we need to save. Hard drives typically hold over 1,000 times what those early drives held. I was computer game shopping with my son-in-law the other day, and he showed me the newest game in his favorite series. On the box, it said that it requires over 11 gigabytes of free space on your hard drive just to install it!

That, of course, got me thinking. With today’s storage capacities, what would it take to back up a reporter’s entire career? We calculated earlier that a stroke of steno takes four bytes. That means our 375 million strokes would require 1.5 billion bytes. As a side note, a billion bytes is not the same as a gigabyte. The metric prefixes mean something a little different in the computer world, as they go up by factors of 1,024 (two to the tenth power) rather than factors of 1,000 (ten to the third power). This gives us:

1 kilobyte (1 KB) = 1,024 bytes
1 megabyte (1 MB) = 1,024 KB = 1,048,576 bytes
1 gigabyte (1 GB) = 1,024 MB = 1,073,741,824 bytes
1 terabyte (1 TB) = 1,024 GB = 1,099,511,627,776 bytes

[Author Note 2015: For more about large & small numbers, see my article “Billions and Billions: A math lesson for NBC.”]

What about the final transcripts? That depends largely on the format you store them in. In the simplest ASCII format, a 250-page job with typical formatting takes about 375,000 bytes to store (your mileage may vary). Going back to our 250 work days per year over a 20-year span, that’s 1,875,000,000 bytes of ASCII transcript. Add that to our 1.5 billion bytes of steno, and we get 3,375,000,000 bytes, or about 3.14 gigabytes.
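
Putting the storage numbers together in one sketch (the transcript size is the rough 250-page ASCII figure above, and the 1,024-based gigabyte is the one from the table):

    BYTES_PER_STROKE = 4
    CAREER_STROKES = 375_000_000
    BYTES_PER_TRANSCRIPT = 375_000      # ~250-page ASCII job
    TRANSCRIPTS = 250 * 20              # one job per working day, 20 years

    steno_bytes = CAREER_STROKES * BYTES_PER_STROKE        # 1,500,000,000
    transcript_bytes = BYTES_PER_TRANSCRIPT * TRANSCRIPTS  # 1,875,000,000
    total_bytes = steno_bytes + transcript_bytes           # 3,375,000,000

    GIGABYTE = 1024 ** 3                       # 1,073,741,824 bytes
    print(round(total_bytes / GIGABYTE, 2))    # about 3.14 gigabytes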

The latest computers these days have the option of writable DVD drives, which have a capacity of about 4.7 gigabytes. This means that a single disk is capable of holding every steno stroke and every final transcript from an entire 20-year reporting career, with space to spare for dictionary backups and digital photos of your favorite attorney clients (just joking on that last one).

What about audio linkage? Assuming 8-hour days, that 20-year career would involve a mind-numbing 40,000 hours of listening to people argue. A typical computer audio recording using the .WAV file format takes up almost 40 MB per hour, even for low-quality mono recording. Saving all of that audio would require over 1.5 terabytes, which is vastly more information than a DVD can hold. Of course, Web servers with over a terabyte of disk space aren’t uncommon any more, and compressed audio formats like MP3 can shrink that 40 MB per hour down dramatically while simultaneously increasing the quality of the recording.
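
And the audio, sketched the same way (40 MB per hour is the rough .WAV figure above, not a precise measurement):

    HOURS_PER_DAY = 8
    CAREER_DAYS = 250 * 20              # working days over 20 years
    MB_PER_HOUR = 40                    # low-quality mono .WAV, roughly

    career_hours = HOURS_PER_DAY * CAREER_DAYS     # 40,000 hours
    audio_mb = career_hours * MB_PER_HOUR          # 1,600,000 MB

    MB_PER_TERABYTE = 1024 * 1024                  # 1,048,576 MB
    print(round(audio_mb / MB_PER_TERABYTE, 2))    # just over 1.5 terabytes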

In the last 20 years, we’ve gone from floppy diskettes holding 360 KB to DVD+RWs holding 4.7 GB. That’s over 10,000 times the storage capacity. If the pattern holds, then in another 20 years, we shouldn’t have any problem holding the steno, finished transcripts, and audio of a reporter’s entire career on a single disk, or on something that looks more like a key fob, a wristwatch, or a credit card.

Perhaps all this recreational number crunching has started your mind racing. Perhaps your eyes have glazed over and the only crunching you’re thinking of is Ben & Jerry’s Heath Bar Crunch Ice Cream. Either way, you just may look at your steno keyboard a little differently tomorrow morning.

SIDEBAR: About the Numbers

In Stephen Hawking’s wonderful book, A Brief History of Time, he says that he was told every equation he used would halve his sales. He managed to cover everything from Heisenberg’s uncertainty principle to string theory using only one equation (E = mc²). If he could pull that off, I figured I could do no less with this article. Unfortunately, either steno is more complex than I thought, or Professor Hawking is just a better writer, because I really needed to slip in a few formulas. I trust they won’t slow you down.

To write this article, I had to make some assumptions. One of them is that you’re using a traditional steno keyboard, with one initial S key, one asterisk, and one number bar. If you start adding function keys, splitting the initial S, and breaking up the number bar, everything changes. Users of steno keyboards like the Gemini and the various Digitext machines have far more potential strokes, assuming their CAT software supports all of the capabilities of their writer.

Captioners: Remembering Your Audience


Back in the days before closed captioning was mandated, translated, and legislated, everything was clear and simple: captions were created for deaf and hard-of-hearing (HoH) people. Looking back through the rosy glow of nostalgia, captioners had a goal and worked with like-minded broadcasters and agencies to serve our target audience.

In reality, that model has been changing since before the Television Decoder Circuitry Act was enacted in 1990. Even then, less than half of the country’s 500,000 caption decoders had been sold to people with hearing impairments. Today, the average American is most likely to think of captioning as something one sees in noisy bars, gyms, and airports, but the people who need captioners are the same ones captioning was created for: the deaf and HoH audience.

High-Quality Captioning: A Conundrum

At first blush, a captioner’s goal seems simple: produce high-quality captioning. Unfortunately, that goal has two major problems. First, defining “quality.” Second, answering the critical question, “quality for whom?”

I primarily use captioning to help me keep up with the dialog on TV while the dog is barking, the grandson is talking, the phone is ringing, and the world cat-wrestling championship is taking place on the couch next to me. When I miss a word, I look at the bottom of the screen and there it is! Deaf people aren’t using captioning to fill in a few gaps. They’re using it as a substitute for the audio track. “Quality” to them isn’t the same as it is to me or to you.

When NCRA issues a CBC (Certified Broadcast Captioner) or CRR (Certified Realtime Reporter) certification, they test what’s practical to test: your ability to produce a verbatim – or near-verbatim – voice-to-text product. Getting those words transcribed and onto the screen isn’t the whole job of a captioner, though. Other facets of the captioning matter, too.

The people I interviewed for this article raised some issues that you may not normally consider part of delivering quality captioning, including:

  • Latency: The delay between the word being spoken and appearing in the captions
  • Positioning: Captions covering critical content on the screen
  • Lack of Speaker ID: Not making it clear who is speaking
  • Non-verbal Cues: Sound effects, song identification, and other non-spoken information

Latency

Dana Mulvany of Differing Abilities told me, “Significantly delayed captions end up denying access, particularly when they are cut off by commercials. They also deny access to understanding the facial expressions as well as the words.”

Delays are definitely a big issue. If the captions lag two or three seconds behind the video, it’s pretty easy to follow along and see the broadcast as a unified whole. I timed a national morning news show several times over several broadcasts, and found delays of seven to nine seconds. When watching a fast-paced newscast, it becomes difficult to understand when the captions are that far behind the video. On talk shows, I’ve measured delays of 20 seconds and more. At that point, you’re several jokes behind, and you can lose content as well.

“Delayed captions can get cut off when the program is interrupted by commercials or the end of the program, so they can be highly disruptive,” Mulvany said.

Digital satellite broadcasts delay the video by several seconds, and DTV transcoding of captions can introduce even more delay.

Philip Bravin, former Chair of the Gallaudet University Board of Trustees, commented, “Sometimes I go back to standard definition just to enjoy captions on news better, because the latency in HDTV captioning is driving me crazy.”

One way that captioners can reduce the delay in the captions is to listen to a direct audio source over the phone instead of pulling audio from a digital broadcast. Additionally, most captioning software allows you to adjust the delay time. Clearly, if the software holds back a line or more of captions, you have more time to correct errors, which makes the caption text more accurate. This, unfortunately, comes at the expense of usability, as the delay makes captions harder to follow.

There’s more to the latency story than that, however, and most of it is out of the captioner’s control. As an example, my wife (freelance stenocaptioner Kathy Robson) was doing a sports broadcast the other day. The client required the captions to be routed to several encoders, which meant she had to dial in to the captioning firm, which split the signal and routed it to multiple destinations. I timed the delay with a stopwatch: from the time the captions left her computer until they appeared on the satellite image we were watching was just under eight seconds. You can do your part, but you can’t fix that problem.

Positioning

I’m not speaking here of purely aesthetic decisions about where to place captions, but of practical positioning decisions that affect the usability and understandability of the captioning. Typically, this means captions covering critical content on the screen.

“[It] drives me nuts when they are captioning something that is written on the screen, like David Letterman’s Top 10 List,” said Tom Willard, Editor of Deafweekly. “Why don’t the captionists look up at the screen and stop captioning when the info is right there on the screen?”

Willard is speaking of a situation where the captions needlessly duplicate what’s on the screen – and sometimes introduce errors in doing so. Back in the old days, a captioner could simply stop writing when the Top 10 List appeared. Today, the caption text is often aggregated to produce searchable video. This means captioners can’t simply stop writing.

“Data mining is just a byproduct, I would think, but the reason there are captions is guys like me,” said Bill Graham, founder of the Association of Late-Deafened Adults (ALDA). As much as the deaf community would like to believe that, broadcasters see it differently, and if the text you attach to the video is a byproduct, it’s a very important one.

There is another placement issue as well, where the captions are covering unrelated, yet still important, information. Television producers do not make this situation easy for captioners. Turn on an NFL broadcast, and you’ll see text and graphics covering nearly a third of the screen. What do you cover with the captions? The score? The other graphics? Or the game itself?

Even when the on-screen graphics consist of a single line of Chyron text, the captions often cover that text instead of bumping up a line or moving to the top of the screen. That text may contain the name and title of a person being interviewed, which isn’t mentioned in the captions. I sometimes find myself pausing the video, backing up, turning off captions, replaying a segment, and then turning captions back on, just so I can see names and titles that the captions were covering.

What can a captioner do about it? Placement is often mandated by the broadcaster, and your only option is to make sure they’re aware of the problem when they don’t leave you a place to put the captions. Most broadcasters have a television monitor somewhere in the studio showing the captions, but that doesn’t mean someone’s watching it.

“I’d guess at most of the stations who have engineers watching captions, they don’t pay too much attention until they have to,” Bill Graham noted.

Speaker Identification

Hearing people can usually tell who is speaking even when they can’t understand what’s being said. Deaf viewers, however, rely on lip reading and other visual cues to identify speakers. If the speaker is off-screen or not facing the camera, they rely on the captions for speaker identification.

“I personally am hard of hearing, so I’m able to catch most of the emotional nuances when I’m listening to the TV,” said Mulvany. “I also can catch the facial expressions if I’m not listening to the sound and if the captions are synchronized.” Extreme delays definitely exacerbate the problem. It’s hard to remember whose lips were moving eight seconds ago in a fast-moving show.

There are a lot of reasons not to provide speaker identification when realtiming. It slows you down; sometimes you don’t know who’s speaking; you may not get the names in advance.

All understandable, but there is a middle ground. On a talk show, for example, having speaker IDs for the host and sidekick or bandleader might be enough if you add “Guest” and “Audience Member.” Even following the news convention of starting a line with >> when there is a new speaker would be a big help on many shows.

Mulvany went on to add, “Europeans use color to indicate who is speaking, so if that has been proven to work there, it would seem very useful here, too.” I’ve raised this question with captioners in the past, and met with a great deal of resistance, but I’m not entirely sure why.

Quite some time ago, I was doing some work with the BBC. They assigned colors to each of the anchors on the show, and used white text for everyone else. Once the speaker IDs were properly defined in the captioning software, the entire process was automatic. We’ve had that capability in U.S. captioning software for over 20 years, yet I know of nobody who uses it.

Non-verbal Cues

In the 1970s and 80s, when someone asked me the difference between closed captioning and subtitling, I had two easy differences to point out. The first was that captions could be turned on and off and subtitles couldn’t. The second was that captions included non-verbal cues for deaf/HoH people (e.g., “gunshot” or “footsteps approaching” or “Beethoven’s Fifth Symphony playing softly”).

This seems to have tapered off in recent years, and consumers who don’t understand it may actually complain about it, as we saw in January 2011. President Obama was speaking in Tucson at a memorial service, and someone happened to photograph the captioning on the Jumbotron just when the line [APPLAUSE] appeared. A blogger named Jim Hoft manufactured quite a bit of outrage by claiming that the captioner was asking audience members to applaud rather than indicating that they already were. He was shouted down rather swiftly, but the lesson remains: there are people who don’t understand why non-verbal cues are included in the captions.

Some broadcasters or captioners may be omitting non-verbal cues on purpose, but that’s not always how the deaf viewers see it.

“There just seem to be variations based on how diligent people are about doing their jobs,” said Willard. “I do see shows that give a lot of clues about background noise and others that don’t. Seems to come down to how much they care.”

Sometimes the deaf and HoH audiences ask for things that may not be practical to provide. “I think it’s probably not possible for realtime captioners to provide all the non-verbal information that’s desirable,” Mulvany said, “but I do think it’s very important to indicate when the tone of voice is sarcastic or ironic.”

Is There an Answer?

The shift in captioning focus isn’t all bad. Bravin noted, “Captioning has become more or less mainstream, so the deaf and HoH focus is pretty much gone, but it helps force the captioning issue because there’s a legal requirement.”

Currently, television stations in the nation’s top 25 markets are required to provide realtime captioning for newscasts, but all other stations can use TelePrompTer-based captioning. Everyone I spoke to in the deaf/HoH communities agreed that upgrading the rest of the nation to realtime would be a great start.

“It’s been decades and I’m used to it, but the captioning of local news is a pain in the neck if you’re not in one of the big markets that requires real-time captioning,” said Tom Willard.

Training more new captioners is another issue. Obviously, the law of supply and demand would indicate that having too many captioners would drive down pay in a market that’s already seen dramatic declines in hourly rates in the last two decades. But consumers are concerned.

“The quality of the captioning is likely to get worse as the demand for captioning grows simply because there are not enough high-quality captioners out there,” Bill Graham commented. Graham isn’t just looking at television, though.

“All these webinars that are proliferating for example: few are captioned,” he continued. “If there is a webinar to help people get ahead in their jobs, what happens is that deaf people get farther behind. This is going to be a BIG problem in the future: news vs. livelihood; entertainment vs. education and jobs.”

And, finally, Willard echoed a common theme when he was speaking of disappearing (prescripted) captions and said, “I really resent that it is my job to be a compliance officer, that it is up to me to have to complain about it to my cable company.”

Bravin agreed: “It’s too much of a hassle to file a complaint, and then with no complaints it’s harder for the FCC to enforce quality.”

Should the FCC be legislating caption quality? Should broadcasters be working with deaf/HoH consumers to improve captioning? Questions like this can’t be resolved by captioners or captioning companies, but being aware of the issues that affect the lives of deaf and hard-of-hearing people can help keep you focused on the people who need you most.

Facebook: A tool for journalists?


Ask anyone what Facebook is, and they’re likely to give the same short, sweet answer: it’s a social networking site. Indeed, that’s its primary use for me these days (once I have all of the games filtered out and ignore the politics and religion), but that’s not its only use.

As an example, I’m working on an article about closed captioning for the Journal of Court Reporting. I needed some interviews for the article, so I sat down to compile a list of people to talk to. I had my email program open on one of my screens, and Facebook open on the other, and it got me thinking. I’ve been fairly diligent about sorting my friends into lists, and I just happen to have a list for friends who are deaf, hard-of-hearing, or work with deaf and hard-of-hearing people.

I went through the list, saying “oh, I need to talk to her” and “I wonder what he’d say about this issue.” I fired off a quick private message to each of the people I wanted to talk to, and started scheduling interview times. In the past, I’ve done a lot of telephone interviews and a lot of email interviews. I have also done interviews using a variety of chat systems, ranging from CompuServe and IRC to TDDs (telecommunication devices for the deaf), but it’s been quite a while since I’ve done online chat interviews.

Just for kicks, I decided to see how much of the communication for this article I could do using Facebook, and how well it would work out. Obviously, this limited my base of potential interviewees to people I know (or can find) on Facebook. It also slows things down a bit, as typed conversations are slower than oral ones. Here are a few comments, observations, and tips on the process:

  1. Having a verbatim transcript of the interview is handy. During phone interviews, I’m often scrambling to take notes as we talk, and doing it on Facebook chat means I can just cut and paste quotes into the article (however, see #4 below).
  2. The process is much more interactive than an email interview, allowing each question to be tailored based on previous responses. Trying to replicate this in email could stretch the process out for days.
  3. Being able to insert links in the chat is a big help if you want to show the interviewee something and get comments on it.
  4. Using chat introduces an interesting journalistic dilemma. Even careful writers have a tendency to use chat abbreviations (e.g., BTW, OTOH, IIRC) and not worry much about punctuation. When quoting them in the article, should you leave their text as-is, or write it out and re-punctuate it as you would for a phone interview? Hmmm. I think I’ll ask that question on a couple of message boards — or maybe bounce it around on Facebook. I’ll follow up here later.
  5. This could work just as well on Google+, except for the paucity of people on G+ compared to Facebook.
  6. This would be an annoying process on Twitter, worrying all the time about hitting that maximum character count. Some of the Facebook responses were quite long.
  7. The partially-synchronous nature of chat leads to some interesting responses. Often, both of you are typing at the same time (Facebook tries to tell you when the other person’s typing, but that is often flaky). Several times, I typed questions as the interviewees were typing comments that answered my questions. Reading the transcript, it looks like they answered my questions before I asked them!
  8. Sometimes it’s hard to hang back and wait for the other person to finish their thoughts before asking something else, but it pays off if you do!

So, is Facebook a social networking site? Certainly it is. But it’s a lot more these days, too.

Once the article appears in print, I’ll put a copy of it online so you can judge how well the process worked out.

Book signing in Las Vegas on July 29


I will be signing copies of The Closed Captioning Handbook at the National Court Reporters Association convention in Las Vegas later this month. If you will be in the area, but aren’t attending the convention, get in touch with me and I’ll make sure you can still get a signed copy of the book.

I will bring copies of some of my Who Pooped in the Park? books as well, for those who won’t be able to attend my Who Pooped? signing the following day in the visitors center at Red Rock Canyon National Conservation Area.

Just for fun, I’m also going to bring along a few copies of The Court Reporter’s Guide to Cyberspace, which I wrote with Richard Sherman way back in 1996. Most of the information in it is out-of-date, but it is a fun and entertaining romp through the history of … well … court reporters in cyberspace. We’ll have some contests or drawings and give those away.


When: Friday, July 29 from 1:00 to 4:00pm
Where: Near the NCRA Store booth

New and upcoming magazine articles


Just in case you’re keeping track, these are my latest three magazine articles.

  • “Easy Keepers” (Corriente cattle) in the May issue of Acres U.S.A. (out now)
  • “History of Kilts” in the September issue of Renaissance
  • “Internet Caption Delivery” in an upcoming issue of the Journal of Court Reporting
    (the publication of the National Court Reporters Association)