Blog Archives

Captioning Computer Games


This article was originally published in October 1998 as a “Gary Robson on Captioning” column for a magazine called Newswaves for Deaf and Hard-of-Hearing People, which is no longer in publication.


When movies first appeared in the theaters, they had no sound. A sequence of flickering motion was followed by a slide containing some dialog or narration (gee — captions!), which was then followed by more flickering motion. Deaf and hearing viewers had the same experience at the movies, except for the accompanying music.

When sound became a part of the movie experience, the movies began to rely on the sound to communicate the plot, and deaf people could no longer share the experience. This trend was sealed by the time television took off. It took many years for closed captioning to come along and save the day.

Fast-forward to the computer age

When I started playing computer games in the mid-1970s, they were mostly textual. If a game had two distinctly different beep sounds, that was pretty impressive audio technology. When playing a text game, a deaf player and a hearing player were on equal footing, because there was nothing to hear. Does this sound familiar so far?

Even as recently as a few years ago, game writers assumed that the majority of the people playing their games would not have computers with speakers, so sound performed a purely ancillary role. Even at that point, it made no difference whether you could hear.

Now, with sound cards for your PC selling for under $20, and virtually all new computers having sound capabilities built in, history is repeating itself. Computer games have critical instructions, tips, and clues in audio form. With many of these games, it is virtually impossible for a deaf player to get past the introduction.

Enter closed captioning

I was recently approached by Activision to test their newest adventure game, Zork Grand Inquisitor. Why me? Because the game is closed captioned! Not just captioned (i.e. subtitled), but closed captioned, meaning that the captions can be turned on and off.

In “normal” play, Zork Grand Inquisitor has a black bar at the bottom of the screen. When you turn on the captions, they appear in this black bar, and faithfully reproduce the dialog and some of the sound effects. In fact, words that are extremely difficult to make out in the audio are clearly visible in the captions, making the captioning a great tool even for the hearing player.

The captions appear as clear, readable colored text on the black background, in upper- and lower-case. They appear “pop-on” style like a captioned video rather than “roll-up” like live news or sports.

In the time I spent playing, I found no significant dialog missing, and only one glitch in the captioning: a particular sentence that flashed on and off the screen before I could read it. The captioning was remarkably well integrated into the flow of the game, and after playing for a few hours, you forget that non-captioned games even exist. This is the way it’s supposed to work!

One minor complaint, though. Nowhere on the box do we see the familiar “CC” symbol. How is a deaf person to know that this game (clearly labeled as having “Qsound”) is captioned?

I congratulate Activision for taking the initiative and for doing a good job of implementing captioning in Zork Grand Inquisitor. They’ve set an example that I hope all the other game companies will follow.

Video of my TEDx talk, and a few words about the content


TEDxBozeman header

Everything always seems to take longer than expected, and when my talk hit YouTube, I was out of town on vacation for a couple of weeks. I’m back now, and we can get caught up.

First of all, the talk is on the main TED website, but it’s a bit laborious to find. The primary search doesn’t turn it up (and the Gary Robson that appears isn’t me); you have to look in the TEDx section of the site. I’ll save you the trouble and provide a direct link: go here to watch my talk on TED.com.

I also have a direct link to the talk on YouTube, but I can make it even easier than that: here’s an embedded video with closed captions so you just have to click “play.” I am really excited that over 1,500 people have watched this on YouTube in less than a week!

A word about the captions on this video: The TEDxBozeman video crew hadn’t dealt with web captioning before, and when they sent me test files, I was having trouble getting them to play on my computer for some reason. We started with a clean transcript. My wife, Kathy, is a realtime captioner and she volunteered to create the file for me. I did a bit of editing (not much required; Kathy is a pro!) and then the video crew did the timing and placement. We still have a few glitches with line breaks and positioning, but I hope we can get those cleaned up soon.

My “behind the scenes” post talked about the actual experience of giving the talk, but I’d like to talk a bit now about the content. The message in this video is important to me for many reasons, and everyone who shares this video with their friends helps to spread that message. In a nutshell, the message is this: Captions are a nicety for those of us who can hear; they are a necessity for those who can’t. Certainly hearing people outnumber deaf and hard-of-hearing (HoH) people, but we can’t let the needs of the minority be drowned out by the convenience of the majority.

As I said in the talk, the World Health Organization says that 5% of the world’s population has disabling hearing loss. That’s 360 MILLION people! They need more than just an approximation of the dialog randomly tossed on a screen. Their captions are every bit as important as our audio, and those captions should be properly timed, properly placed, properly spelled, and comprehensive.

I am pleased by the action that the FCC has decided to take. They are moving in the right direction, but it’s going to be a difficult move. How do you assign a numeric score to caption quality so that it can be legislated? What’s worse: a misspelled word or a caption covering someone’s face? How far behind can realtime captions be? I don’t envy them the work that’s going to go into legislating quality, but I’ll be happy to jump in and help if they ask. I’ve put a lot of time into questions like that throughout my career.

On TEDx stage with FCC logo

There’s one other thing I’d like to clarify: in no way should my talk be construed as a blanket condemnation of the people performing captioning today. Quite to the contrary, that business is filled with talented, caring people who work their tails off to produce a quality product. Unfortunately, a lot of station executives don’t give captioning the priority it deserves, and the job goes to the lowest bidder rather than the most qualified bidder. A broadcaster can meet the letter of the law today with a captioner who does no preparation, no research, and no post-broadcast QC analysis to improve the next broadcast. This is why realtime captioners earn less than half as much today as they did 20 years ago.

When we do something because we feel that it’s the right thing to do, we want to do it right; when we do something because we’re forced to do it, we’ll often do the least that we can get away with.

Legislation has unfortunately hurt us here, even as it’s helped in many other ways. By forcing everyone to caption, we have increased the quantity of captioning without providing incentive to increase (or even maintain) the quality. It’s good to see that changing.


Closed Captions, V-Chip, and Other VBI Data


I wrote this article in 1999, before the cutover from analog to digital TV. It first appeared in print in the January 2000 issue of Nuts & Volts Magazine.


There’s more in your TV signal than just video and audio. Closed captions, V-chip information, time of day, program and network information, Internet links, and more lurk within the broadcast signal just waiting for you to pull it all out.

Remember those old TV sets with the “vertical hold” knob? If you turned the knob a bit, you’d see parts of two frames, with a black bar between them. That black bar is the VBI, or vertical blanking interval. It consists of the first 21 scan lines of each field, although what’s contained there is non-picture data and sync signals.

In 1976, the FCC reserved line 21 of the VBI for closed captioning. Since mid-1993, all television sets 13″ and larger have been required to have caption decoders built in. If your TV is newer than that, you can press the “CC” button on your remote or choose “captioning” from your on-screen menu, and watch the program audio rendered as text on the screen.

With the recent proliferation of inexpensive TV tuner cards for computers, you now have an easy way to get this data into your computer and manipulate it.

What’s actually in there

Each frame of a television signal consists of two interlaced fields, so line 21 is transmitted twice per frame: once in each field. Each field of line 21 carries a single stream of data, containing different types of data packets. The bandwidth was kept very low to make the data as robust as possible, so each field of each frame can contain only two 7-bit characters. Since video is transmitted at 30 frames per second, that gives us 60 characters per second in each field.

Field 1 of line 21 contains two captioning channels (CC1 and CC2) and two “text” channels (TEXT1 and TEXT2). All four of these data channels share that 60 cps data stream, and the information is sorted out using packet headers. Field 2 contains a matching set of data channels (CC3, CC4, TEXT3, and TEXT4), and can also contain extended data services (XDS) packets.

If you consider a “word” to be five characters plus a space, then we have 600 words per minute of bandwidth in field 1. Theoretically, this should be plenty for two caption channels, but when overhead, positioning information, attribute data, and the two text channels are factored in, it may not be enough. On top of all that, dialog comes in bursts, and those bursts are likely to hit both caption channels at the same time. For this reason, programs with two caption channels will typically put the second caption stream into CC3, which gives each caption stream its own 60 cps of bandwidth. An example is CBS’ 60 Minutes, which puts English captions in CC1 and Spanish captions in CC3.

The character set used for this data is a slightly modified 7-bit ASCII. All of the standard alphanumeric characters are where you would expect them, but a few code points in the hex 20-7F range have been reassigned to accented letters. The full character set can be found in my Closed Caption FAQ.

Closed caption and text data

The vast majority of programming in the U.S. uses only CC1 for caption data. Until recently, few programs actually had captions, but the Telecommunications Act of 1996 makes captioning mandatory. As of January 2000, the first milestone in the Telecom Act requires a minimum of 5 hours per day on each channel, and some channels caption much more than that, so there’s plenty of caption data out there.

Although the caption data specification (EIA-608) allows for italicizing, underlining, flashing (blinking), and various foreground colors, the only attribute used with any regularity is italics. Captioners typically use italics to designate an off-screen speaker, a narrator, or a sound effect.

If you’re going to save captions to a text file, then you’ll probably want to do your own word wrapping. Most closed captioning starts a new line at the end of every sentence, and the 32-character line width is shorter than you’d want for most applications. The typical flag for “change of speaker” is a pair of greater-than symbols at the beginning of a line, which may or may not be followed by a speaker identification.
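As a rough illustration, a re-wrapper might look like the Python sketch below. The “>>” speaker-change flag and the short caption lines come straight from the description above; the function name and the choice to start a new paragraph at each speaker change are just illustrative.

    import textwrap

    def rewrap_captions(lines, width=72):
        # Join short caption lines into paragraphs, starting a new
        # paragraph at each ">>" speaker-change flag.
        paragraphs, current = [], []
        for line in lines:
            line = line.strip()
            if line.startswith(">>"):        # speaker change: flush
                if current:
                    paragraphs.append(" ".join(current))
                current = [line]
            elif line:
                current.append(line)
        if current:
            paragraphs.append(" ".join(current))
        return "\n\n".join(textwrap.fill(p, width) for p in paragraphs)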

In most cases, the only service in field 1 with data in it will be CC1. If you aren’t using a card that sorts out the data services for you, there’s an easy way to deal with the raw byte stream in that CC1-only situation. Just take any block of consecutive bytes less than 20h, and replace them with a single space. You can use a simple lookup table to do the substitutions where the character set deviates from US-ASCII.

Caution: If you try this when there’s data in CC2, TEXT1, or TEXT2, it will be interspersed in your CC1 data, and will make everything totally unreadable. Your best bet is to look for a TV tuner card that separates the data services for you.
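Here’s a minimal sketch of that cleanup in Python, assuming you’re in the CC1-only situation described above and have already stripped the parity bits (leaving 7-bit values). The handful of substitutions shown are common EIA-608 deviations from US-ASCII; see my FAQ for the full table.

    # A few of the EIA-608 code points that deviate from US-ASCII:
    SUBSTITUTIONS = {
        0x2A: "á", 0x5C: "é", 0x5E: "í", 0x5F: "ó",
        0x60: "ú", 0x7B: "ç", 0x7D: "Ñ", 0x7E: "ñ",
    }

    def scrub_cc1(raw: bytes) -> str:
        out, in_control_run = [], False
        for b in raw:
            if b < 0x20:                 # collapse each run of control
                if not in_control_run:   # bytes into a single space
                    out.append(" ")
                    in_control_run = True
            else:
                out.append(SUBSTITUTIONS.get(b, chr(b)))
                in_control_run = False
        return "".join(out)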

Closed caption data may be positioned anywhere on the screen, but no more than four lines (rows) can be visible at a time. Captions are typically positioned so that nothing critical in the picture will be covered. Text data, on the other hand, is designed to fill the screen (although some televisions limit it to half), completely covering the picture.

Interactive TV and Internet data

Traditionally, the text channels have been used for things like on-screen program guides, but such services are rare today. The most common use of the text channels now is ITV (Interactive Television) Links.

ITV Links were developed by WebTV and VITAC as a way to transmit Internet URLs (Uniform Resource Locators) in the video signal for set-top boxes. These URLs point to Web pages that contain more information about the program or commercial currently airing. For example, during a program about electronics magazines, the station may insert an ITV link pointing to Nuts & Volts magazine. When that link is broadcast, people with WebTV Plus set-top boxes would see an Internet icon in the corner of their TV screen. They could then press a key on their WebTV controller, and be taken directly to the Nuts & Volts home page, or whatever page was indicated.

The ITV link itself is broadcast in TEXT2, using ordinary ISO-8859-1 (Latin-1) characters rather than the modified closed caption character set described above. It consists of a URL enclosed in angle brackets, an optional series of attributes in square brackets, and a checksum.

The only attributes you’re likely to care about if you’re parsing ITV links are the URL and the name of the link. To find the name, scan for the text “[name:” (or just “[n:”), and parse to the next closing square bracket.

The URL field is not limited to only Web addresses, so if you’re using these links, be prepared to deal not only with http links, but with mailto, news, and other link types as well. For example, an ITV link might look like this:

<mailto:gary@robson.org>[t:s][name:Email the Author]
                           [expires:20000521T115959][CE8A]

The [t:s] is a “type” field, the expiration date tells you how long the link will be good (May 21, 2000 at 11:59:59 in this example), and the final [CE8A] is a checksum (see Internet RFC 1071 for details).
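If you want to pull these apart programmatically, a parser based only on the structure described above might look like this Python sketch. The names are mine, and a real implementation should also verify the checksum per RFC 1071.

    import re

    def parse_itv_link(text: str):
        # The URL is enclosed in angle brackets.
        m = re.search(r"<([^>]+)>", text)
        if not m:
            return None
        # Attributes are [key:value] pairs; the bare [CE8A] checksum
        # has no colon, so it simply won't match this pattern.
        attrs = dict(re.findall(r"\[([^:\]]+):([^\]]*)\]", text[m.end():]))
        return {
            "url": m.group(1),
            "name": attrs.get("name") or attrs.get("n"),  # [n:] is legal too
            "attrs": attrs,
        }

    # parse_itv_link('<mailto:gary@robson.org>[t:s][name:Email the Author]')
    # -> {'url': 'mailto:gary@robson.org', 'name': 'Email the Author', ...}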

XDS data

The extended data services provide information about the current program, TV station, and network. Unlike the caption and text data, they are packets rather than continuous streams of data.

The XDS packet most likely to change the world is the time-of-day packet. VCRs and TVs can use it to set their own clocks, eliminating the “blinking 12:00” phenomenon so common in non-techie households. Other XDS packets include:

  • Name, length, and start time of current show
  • Type of show, based on a set of category codes
  • Program content advisory (see “V-chip data” below)
  • Network name
  • Station name and number
  • National weather service warning codes

To read XDS information, scan the data stream from line 21 field 2. The start code for an XDS packet is a byte less than 0Fh followed by a packet type byte. The end code is a 0Fh byte. As an example, if you wished to set your computer’s clock from the TV signal, you’d scan for a packet starting with 07h 01h, as in Figure 1.

Figure 1: The XDS “Time of Day” packet

There are a few oddities about this packet that need to be explored. First, the seconds. Rather than transmitting a whole byte for the seconds, the Z bit is set to 1 whenever the seconds are zero. This means that setting the time could take as long as a minute, while you wait for the seconds to tick over. You could instead watch for the minute value to change and use that as your “seconds = 0” indicator, but the Z bit lets you do the same thing without keeping state.

All times are UTC (also known as GMT). You need to know the time zone you’re in to set your local clock. If the D bit is on, it is daylight saving time.

To set the date, add 1990 to the value of the year bits. Yes, this means the system will break down in 2054, but the broadcast industry expects everyone to be switched over to DTV by then, where this mechanism is different. If the date shows as March 1, but your time zone indicates that your clock should be set a day earlier, you can use the L flag to determine if it is a leap year. If the L flag is on (one), then the date is February 29. Otherwise, it is February 28.
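To make that concrete, here’s a Python sketch that scans a field 2 stream for the time-of-day packet and unpacks it. The bit positions follow my reading of the layout in Figure 1 (minute, hour, date, month, day of week, year in successive bytes), so double-check them against the spec before relying on this.

    def find_xds_packet(stream: bytes, cls: int, typ: int):
        # Return the informational bytes of the first packet matching
        # the given class and type bytes. Packets end with 0Fh.
        for i in range(len(stream) - 1):
            if stream[i] == cls and stream[i + 1] == typ:
                body = bytearray()
                j = i + 2
                while j < len(stream) and stream[j] != 0x0F:
                    body.append(stream[j])
                    j += 1
                return bytes(body)
        return None

    def decode_time_of_day(body: bytes):
        if body is None or len(body) < 6:
            return None
        return {
            "minute": body[0] & 0x3F,
            "hour":   body[1] & 0x1F,
            "dst":    bool(body[1] & 0x20),   # D bit
            "date":   body[2] & 0x1F,
            "leap":   bool(body[2] & 0x20),   # L bit
            "month":  body[3] & 0x0F,
            "zero":   bool(body[3] & 0x10),   # Z bit: seconds are zero
            "year":   (body[5] & 0x3F) + 1990,
        }

    # body = find_xds_packet(field2_bytes, 0x07, 0x01)  # time of day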

If you wish to decode and interpret these packets further, you’ll want a copy of the EIA-608 specification (see the sidebar, “Where to get more information” below).

AUTHOR NOTE: This information is also available in The Closed Captioning Handbook, available April 2004 from Focal Press

V-chip data

XDS is also how the infamous V-chip gets its data (see the sidebar, “What’s a V-chip?” below). The V-chip spec supports four different rating systems, although any one program can only be rated using one system.

  • MPAA is the rating system you’re used to from the movies (G, PG, PG-13, R, NC-17, X).
  • US TV Parental Guidelines is the new system developed specifically for V-chip (TV-Y, TV-Y7, TV-G, TV-PG, TV-14, TV-MA).
  • Canadian English is used throughout all of English-speaking Canada.
  • Canadian French is used in Quebec.

Let’s look at the anatomy of a typical V-chip packet. It will always begin with the two-byte pair 01h, 05h. The meaning of the next two bytes varies depending on the rating system. Since the US TV Parental Guidelines is the system you’ll see the most, we’ll use that one. The next two bytes would look like Figure 2:

Figure 2: A V-Chip data packet

The D, V, S, and L bits are flags that further refine the rating. The D flag indicates sexually suggestive dialog, V is violence, S is sexual situations, and L is strong language.

Like all other XDS packets, the parental guidelines packets must end in 0F hex. To put this all together, a program rated TV-PG-V would have a V-chip packet of 01h 05h 48h 64h 0Fh.
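As a sketch, decoding those two bytes in Python might look like this. The bit positions are my reading of the spec, sanity-checked against the 01h 05h 48h 64h 0Fh example above, and the names are mine.

    US_TV_RATINGS = ["None", "TV-Y", "TV-Y7", "TV-G",
                     "TV-PG", "TV-14", "TV-MA", "None"]

    def decode_us_tv_rating(a: int, b: int) -> str:
        # First byte: bit 3 selects the US TV Parental Guidelines
        # system; bit 5 is the D flag. Second byte: bits 0-2 are the
        # rating; bits 5, 4, and 3 are the V, S, and L flags.
        if not (a & 0x08):
            raise ValueError("not a US TV Parental Guidelines rating")
        rating = US_TV_RATINGS[b & 0x07]
        if a & 0x20: rating += "-D"      # suggestive dialog
        if b & 0x20: rating += "-V"      # violence
        if b & 0x10: rating += "-S"      # sexual situations
        if b & 0x08: rating += "-L"      # strong language
        return rating

    # decode_us_tv_rating(0x48, 0x64) -> "TV-PG-V"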

What now?

Once you have figured out how to read and interpret this data, what do you do with it? You could:

  • Make transcripts of your favorite shows. Note that this information is a copyrighted part of the video, and you can’t sell it or post it on your Web site.
  • Make a smart “TV Agent” that runs in the background and tells you when there’s something interesting to watch (see the sidebar, “Your own TV-watching agent” below).
  • Track Internet links. When an ITV link appears, automatically feed it to your Web browser so you can see what’s related to your current show.
  • Set the time on your computer.
  • Watch for weather alerts in XDS. You could tie this to audible alerts, or even have your computer use the modem to call your pager or cell phone. Don’t rely on getting your alerts here, though, because few stations actually broadcast them.
  • Collect song lyrics. Not many music videos are captioned today, but the number is increasing steadily as the Telecommunications Act mandate takes effect. Again, be careful of copyright considerations here.

I even found someone who had built a “commercial killer” by detecting patterns in line 21 data that usually indicate the start and end of commercials. His program muted the TV’s volume when it detected a commercial, but yours could do whatever you wish.

Good luck, and have fun mining line 21. If you come up with an interesting application for caption data on your computer, email me and let me know!

SIDEBAR: Where to get more information

If you want to get serious about decoding and interpreting captioning and other line 21 data, you’ll want to pick up the appropriate standards documents. You can get them from

Global Engineering Documents
15 Inverness Way East
Englewood, CO  80112-5776  USA
Phone: 800/854-7179 (U.S. and Canada)
       303/397-7956 (International)
Email: global@ihs.com
Web:   global.ihs.com

Be prepared to pay, though. The base document, EIA-608, is over $100, and there are auxiliary documents you would also need.

The author of this article has written a book called Inside Captioning. It does not have detailed instructions for decoding line 21 data, but it does have extensive information about the industry, the technology, and the history of captions.

AUTHOR NOTE: Inside Captioning is out of print. The new Closed Captioning Handbook is a far more comprehensive guide, filled with technical data right down to the bits and bytes.

SIDEBAR: What’s a V-chip?

Don’t try tearing apart your television set looking for the V-chip. You won’t find it. Even though all televisions must contain a “V-chip” now by law, there really is no such thing.

The data for the V-chip, as the article explains, is simply XDS packets containing parental content advisories. Since the TV must contain circuitry to interpret captioning information in the VBI, the V-chip capabilities were added to the captioning chip.

The V-chip, short for “violence chip,” allows parents to control what shows their children can watch. To use this capability, you set filters on your TV. Depending on the rating system being used, you can get fairly detailed. For example, you might choose to allow anything rated TV-14, unless it contains excessive violence (TV-14-V). The set will then monitor the incoming signal, and if it detects anything rated TV-14-V or higher, the audio, video, and captions will be blocked.

The visible content advisory icon that appears in the corner of your screen at the beginning of a program is not generated by your television, and isn’t dependent on the V-chip data. The V-chip data itself is retransmitted constantly, so that if you change channels, your set will detect the new rating quickly.

SIDEBAR: Your own TV-watching agent

Once your computer can read line 21 data from the VBI, an obvious application would be a program to “watch” a specified channel and notify you when something of interest comes up.

Such a program would require a triggering mechanism, such as recognition of a word or phrase from a keyword list. Make sure your keyword checking is not case-sensitive, as most, but not all, captioning is done in uppercase. You could also trigger on ITV links or XDS data.

Once you’ve defined your triggers, you need to define the action. Do you want the program to notify you, using audible alerts? Do you want it to activate a full-screen TV picture on your computer, with the volume turned up? Do you want it to save you a transcript for later? Turn on a VCR? Send you an email? Page you?

If you’re going to have the program save caption data, make sure you back up a bit from the place where the keyword was recognized, so that you get the whole story in context.

You’ll also need a trigger to turn it off. The easiest way to do this is with a timer. If you do that, you should reset the timer every time a keyword is triggered, so that you’ll get all of a long story. You might also want to be more liberal with your keywords in this second trigger.

For example, if you’re scanning CNN for mention of Apple Computer, you wouldn’t want to use the keyword “apple” as a trigger, or you’d get far too many false hits. Once you’ve triggered on Apple Computer, though, you would probably want any mention of the word “apple” to keep the recorder running.
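Tying the sidebar’s pieces together, the trigger-and-timeout logic might look something like this Python sketch; the keyword lists, timeout, and buffer size are all placeholder values.

    from collections import deque

    START_WORDS  = {"apple computer"}   # narrow: starts a capture
    EXTEND_WORDS = {"apple"}            # broad: keeps it running
    TIMEOUT      = 120                  # seconds before shutting off

    def watch(caption_lines):
        # caption_lines yields (timestamp, text) tuples.
        recent = deque(maxlen=20)       # context to "back up a bit"
        capturing, deadline, transcript = False, 0.0, []
        for stamp, text in caption_lines:
            lowered = text.lower()      # captioning is mostly uppercase
            if not capturing:
                recent.append(text)
                if any(w in lowered for w in START_WORDS):
                    capturing = True
                    deadline = stamp + TIMEOUT
                    transcript.extend(recent)   # include the lead-in
            else:
                transcript.append(text)
                if any(w in lowered for w in EXTEND_WORDS):
                    deadline = stamp + TIMEOUT  # reset the timer
                elif stamp > deadline:
                    capturing = False   # story over; act on transcript
        return transcript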

Behind the scenes: My talk at TEDxBozeman


If you don’t know what a TED talk is, or you don’t know the difference between TED and TEDx, please start by reading my TED post from last November. Okay, you’re back. Good!

I’ve been asked a lot of questions about TEDxBozeman and my talk, and now that it’s over and I have decompressed a bit, I will be happy to answer them. I’ll start by saying that (A) the talk should be on TED.com and YouTube by April 21, (B) yes, my talk will be captioned, and (C) I will post more detail about the talk itself in the next few weeks.

Most of the questions, though, were about the event itself. How does this all work? What goes into a TEDx event?

On stage at TEDxBozeman 2014.

That nine minutes on stage is the culmination of months of work for me, and the process started much earlier than that for the team that put on the event. For me, it began last October when Ken Fichtler, the co-founder of TEDxBozeman, stopped by my tea bar. I wasn’t there, but he left me a note suggesting that I apply to be a presenter. Obviously, I leaped on the opportunity.

On November 19, a few nail-biting weeks after I submitted my application, the selection committee sent an email saying they’d chosen me as one of their speakers. At that point, I officially committed to do something I’d never done: memorize a speech. I’ve done a lot of public speaking, ranging from educational seminars to emceeing live events. In every single instance, I’ve had notes.

I’m good at following an outline. My speaking style, however, is like my father’s.

“Anyone who tells a joke or story the same way twice is just plain lazy.”
— Dad

He always said that a successful speaker or storyteller needs to be constantly reading the audience and adjusting the speech, and that’s what I learned to do. My notes keep me on track and I improvise the words. That isn’t the way things work in TED talks.

I first recorded myself giving the talk in early February, and sent the video in for reviews. For the most part, they were kind, but there was consensus on a couple of issues:

  1. At 15 minutes, the talk was too long.
  2. I had too many facts and figures. One reviewer actually said I sounded too much like a textbook or a Wikipedia page. Ouch. I did write the textbook on the subject, but that’s most emphatically not what my talk was supposed to sound like.

I went to work on cutting and restructuring the talk. And just as I felt good about it, the rug was pulled out from under me. The FCC unanimously voted to implement new quality standards for captioning. I had one weekend to rip out my entire lecture about why the FCC should be doing this and instead focus on what they were doing.

Talking about quality

I was using the word “quality” to talk about captions here. I could just as easily have been talking about the staff that put this event on. They were an amazing group!

I arrived in Bozeman two days before the event.

My handler

I’d like to think I don’t need much handling, but my handler certainly was helpful in making sure I was in the right places at the right times.

I should note at this point that TEDxBozeman is put on entirely by volunteers. Dozens of people donated their time to do staging, sound, video, check-in, decoration, and more. Even our handlers were volunteers. Yes, we had handlers! The TEDx speakers are not paid for this. We volunteered our time as well. They did, however, provide hotel rooms for those of us coming in from out of town, and fed us a couple of times as well. That was much appreciated.

Wednesday night was a presenter dinner. We all had an opportunity to meet each other — I had talked to everyone on video chat, but we hadn’t met face-to-face — and to meet the organizers.

The lineup of speakers for TEDxBozeman 2014 was downright intimidating. At one point, I was talking with several of them and realized I was the only one in the group without a Ph.D. I felt like Wolowitz on The Big Bang Theory, but at least he has a master’s degree. I don’t even have that!

On the other hand, many of them were speaking about subjects that really interested me. Mary Schweitzer’s talk about paleontology and studying dinosaur proteins. Rebecca Watters’ talk about wolverines. Molly Cross’ talk about climate change. We were seated at three tables, so I didn’t get a chance to talk with everyone, but I sure liked what I was hearing.

Presenter dinner

Our emcee, Paul Anderson, describes the ritual dismemberment of speakers who forget their lines. Actually, he was telling a joke, but it sure looks like he was talking about ritual dismemberment.

The organizers then gave us a little pep talk. It helped that Paul Anderson, the emcee, had done a TEDxBozeman talk himself a couple of years ago, so he was able to tell us what to expect. After dinner, I headed back to the hotel and rehearsed a few more times in front of the mirror.

Thursday was dress rehearsal day, and I got my first glimpse of the venue. Wow! The decorations and sets weren’t fully assembled yet, but I could already tell it was going to look great. For the first time, we got miked up and climbed up on stage to do a live run-through. I watched the person before me do her talk, but I didn’t really see it. This was all starting to sink in.

I started the dress rehearsal by getting about 30 seconds into the talk before my video stopped working. We took a break and they figured everything out. We started again, and I must have gotten out two whole sentences before my mind went completely and utterly blank. I just stood there. The third time was a charm, however, and we made it all the way through.

Then the lighting guys came up and said that my hat was going to be an issue. It cast a shadow over my eyes. The speaker coordinator, Maddie Cebuhar, said maybe I just shouldn’t wear it. Three people said, “Oh, no. He has to wear that hat. We’ll make this work.” What we ended up deciding was that I’d tilt the hat back, and then pay attention on stage. If the lights weren’t in my eyes, I had to lift my head or tilt the hat more.

Holding the sign

They really should expect that if they have pieces of the set lying around, someone like me will pick them up and play with them. That thing’s metal, by the way. It’s quite heavy.

After my dress rehearsal, I went back to the hotel. I was pumped full of adrenaline. The email from Maddie didn’t help. She said, “Just in case there are any technical difficulties in getting your [PowerPoint] started, please be ready with a casual filler (so that you are not just standing awkwardly on stage).” I prepared a joke:

“An astrophysicist, an entrepreneur, and a wolverine expert walk into a bar. The bartender says, ‘What is this? Some kind of TED talk?'”

Then I sat there thinking about how I’d cover if the video equipment caught fire. Then I called my wife. She talked me down and told me to go clear my head, so I went to the Museum of the Rockies, where I ended up seeing an exhibit about a dinosaur dig one of the presenters worked on. Cool! I also went to the show at the planetarium. I read a book for a while, met a friend for a beer, and went back to the room to rehearse a few more times.

Friday. The big day. We had a speaker room where they fed us burritos (Yay! Beans for the presenters!) and gave us the big pre-show pep talk. I walked into the room where we’d rehearsed the day before. The sets were done. The stage was together. The cameras were set up. And the crowd was filing in.

The crowd

A sold-out room with over 500 attendees. I have no idea how many more were watching the live stream online, or how many more will eventually watch these talks on TED.com.

Our talks were divided into three sessions. I sat with my wife and daughter and watched the first set. It opened with ceremonial Native American drumming and singing by the Bobcat Singers, a TED video, and some very professional speakers who had obviously done this kind of thing a million times before (Michelle Larson and Greg Gianforte). My mind was fuzzed out by all of the adrenaline. I’ll have to watch those again because I don’t remember them.

I paid attention to Mary Schweitzer’s talk because it interested me, we watched another video, and then “Basement Jazz” closed the first session with spontaneous jazz/funk improvisation. We went into the first break, and I went to do one last dry run before we started. My handler tracked me down in a dark room where I was practicing and dragged me over to the “speaker corner” so they could wire me up for sound. I wandered back into a storage room (telling him where I was going this time) and practiced some more. When our session started, I stood and watched the first speaker (Carmen McSpadden), but I really wasn’t seeing or hearing her. I was up next.

Do I look calm? Because I’m not!

Everyone told me later that I looked calm and cool when I presented, but I certainly didn’t feel that way at the time. All I could think of was forgetting my talk. Or the video equipment bursting into flame. This is where a hundred practice runs took over. I got through the first few minutes, realized it was all flowing, and just let it go. The nervous energy turned into passion. Next thing I knew, the talk was over and I was headed offstage. They got my microphone off and I watched the next talk: Rebecca Watters’ wolverine presentation was as fascinating as I had expected. Then, I got my mind blown.

Theo Bennett is a high school senior, and he gave one of the most stirring, emotional, passionate, inspirational talks I’ve ever heard. Half of the audience was in tears. He got the only standing ovation of the day, and boy, did he deserve it. Later, I tracked down Maddie and thanked her profusely for not making me follow Theo.

From that point on, I was able to relax and really watch everyone else’s talks. I hope that if you’ve read this far in my ridiculously long blog post, you’re interested enough to watch all of these when the videos are released.

Tate Chamberlin’s “experiential remix” was unique and stirring. He told his whole story to music, and it was quite a story. I’m glad I didn’t have to follow him, either.

Molly Cross has a very different point of view about climate change. It’s best summed up as “let’s take the things we can’t change and figure out how to get excited about them.” It really made me look at climate change differently.

The rest of the third session, with the exception of the musical performances by Josh Powell and the Bobcat Singers, centered on technology. I want to stay in touch with Craig Beals to see how his simple “how are you?” questions to his students develop down the road. Graham Austin made me think about my store and how people connect to it. And I’ll definitely be staying in touch with Rob Irizarry about CodeMontana and teaching children about computers.

By the end of the day, I was really ready for a beer. They served us a wonderful dinner, and we mixed and chatted. I went back to the hotel room, fully intending to head out to the after party (Tate told me all the cool kids would be there), but once I walked into the room I realized I was completely exhausted. By the time the after party started, I was already asleep.

TEDxBozeman inspired me to push my boundaries. It introduced me to a lot of people I hope will turn into friends. It crossed an item off of my bucket list. And I think it made me a better person.

TEDx Talk Details. Vague details, but details nonetheless.


When I first wrote about the talk I’m giving at TEDxBozeman, there wasn’t really a lot to say about it. My application had just been accepted. The details weren’t nailed down. The lineup hadn’t been posted. Things were quite preliminary.

TEDxBozeman logo

Today, I know quite a bit more, but I’m not allowed to ruin the surprise. The TEDxBozeman website has bios for all of us, but it still doesn’t list details about our talks. We are, in fact, forbidden to publish our slides or outlines beforehand. But there are a few things I can tell you!

The event

Tickets are sold out. If you haven’t purchased a ticket yet, you’re out of luck. You can, however, still be a virtual attendee. All of TEDxBozeman will be streamed live on Livestream next Friday, March 21, 2014. The event link is http://new.livestream.com/tedx/events/2814001. The theme is “Pioneer Spirit.” The schedule is approximate, as this is a live event and nothing ever goes as planned, so I can’t tell you the length or start time of any given talk. Here’s what I do know:

TEDxBozeman begins at 1:00 p.m. Mountain Time on Friday, March 21. If you’re connecting to the stream online, do it early.

My talk will begin at approximately 2:50. If you wish to watch, I recommend connecting at least ten minutes before that, just to be safe.

My TEDx talk

The title of my talk is, “Does Closed Captioning Still Serve Deaf People?” In the talk, I will briefly explore the history and development of closed captioning for deaf and hard-of-hearing people and look at where it’s heading. For more details, you’ll have to tune in and listen!

TEDx Cover Slide

My cover slide might look something like this.
But then again, it might not!

If you don’t have a ticket to the event and you’re unable to connect to the live stream, fear not! You’ll be able to find my talk, along with the others from TEDxBozeman, on TED.com at some point. When it’s there, I’ll make sure to post the details here.

I will drop one teaser about the content. The FCC made a new ruling about captioning quality last month. It necessitated a number of changes to my talk.

Accessibility

Well, this is a bit embarrassing. I am speaking about accessibility, and my talk will not be closed captioned live. I just couldn’t get things worked out. I promise you, however, that I will do everything in my power to make sure that when it hits TED.com and YouTube, there will be captions on it!

Books!

One of my favorite independent bookstores, Country Bookshelf in Bozeman, will be selling books at the event, and each presenter was allowed to choose one book: our own if we’ve written one, or someone else’s if it inspired us. I chose The Closed Captioning Handbook (duh), but fate — and my publisher — seem to have worked against me.

When The Closed Captioning Handbook became a textbook, the price shot up. The publisher, Focal Press, has made the book available through the mainstream distributors that bookstores buy from, but it is nonreturnable. This basically means that no bookstore other than a campus bookstore or specialty broadcast industry bookstore would ever stock it. Understandably, Country Bookshelf doesn’t want the possibility of ending up stuck with a stack of unsold $75.00 textbooks after the event.

Never one to pass up an opportunity, I came up with an alternative. If we can’t have my primary closed captioning book for sale at the event, we’ll use one of my other books. And so, my friends, even though I’ll be talking about closed captioning, the Gary Robson book at TEDxBozeman will be the Yellowstone National Park edition of Who Pooped in the Park?, because poop books are always appropriate, right?

If you do wish to buy a copy of The Closed Captioning Handbook, Country Bookshelf can order one for you. If you don’t live near Bozeman and won’t be attending the event, you can order one from my store, Red Lodge Books & Tea. If you buy a Who Pooped book at the event, catch me afterward and I’ll be happy to sign it.
