
Speaking to Software

Volume Number: 16 (2000)
Issue Number: 9
Column Tag: Speech Recognition

Speaking to Your Software

by Erik Sea

Making your application work well with IBM ViaVoice Enhanced Edition 2.0

What Can I Say?

It's here! Talking to machines and having them respond and react has been the stuff of science fiction for decades. The promise has been so long in coming that the release of ViaVoice Millennium for Mac last year seemed to take some people by surprise - many a passerby at MacWorld San Francisco was astonished by the speed and accuracy of the system, even in noisy showfloor conditions. Nonetheless, the combination of computational power and algorithm design has finally produced speech recognition software for the Mac that permits routine and productive use, especially as fast new copper IBM PowerPC chips find their way into more and more Macs.

ViaVoice Millennium, the first release, was a low-end product, providing dictation into a single application, SpeakPad, plus non-customizable transfer scripts. It was good for basic dictation, with a large, extensible vocabulary, dictation macros, and AppleScript support. ViaVoice Enhanced builds on this capability, adding new features such as direct dictation into selected applications and allowing customization of "built-in" functions through AppleScript.

"Aha!" you say - "Direct dictation into selected applications, but what if I'm not among the 'selected' few?" Fair enough - IBM can only test and support a few high-profile programs (although the development team is always interested in testing new software for compatibility, particularly games). However, the ViaVoice software doesn't prevent dictation into any application and, in many cases, the Mac OS and ViaVoice extensions that ship with our software are all you need - your application may already support dictation and correction without you writing a single line of code!

Probably, though, you should write a line or two of code. This is essential for maintaining the awe and admiration from your employer, and I know that you really do want to anyway.

ViaVoice Speech Technology

But, before we write code, let's talk about speech. Or speak about talk, and how the ViaVoice engine decides what words it thinks you uttered.

Unlike earlier "discrete" speech recognition systems, which ...required ... distinct ... pauses ... between... words, ViaVoice works with "continuous" speech, with no unnatural breaks between words. In consumer products, we're not quite to the stage where you can have conversations with your computer, or even record or transcribe a speech or a meeting, but for one person, sitting at a computer, speaking clearly and providing cues such as punctuation and formatting, recognition is really quite good. In any case, there are other technologies that will need some work before you can say, "Tea, Earl Grey, Hot" and get what you would like.

Training

Recognition accuracy is also improved by training, which allows ViaVoice to construct a mathematical picture of your voice, which it can then use with its models.

To train the system, the user reads it a prepared story - the system knows what the words are, and what they ought to sound like. By comparing these expected sounds to the actual sounds, differences can be calculated down to the individual sounds that make up a word. For example, suppose that when I talk I make my Ts sound like Ds much of the time: "butter" sounds more like "budder" when I say it, but someone else may very crisply say "butter".

Once this picture of my voice exists, ViaVoice can predict with some level of accuracy how I might say a specific word.

Vocabularies and Language Models

The ViaVoice vocabularies (sometimes called "dictionaries") and language models are basically large databases of word pronunciations and of word positions relative to other words, respectively. You can add your own words to the vocabulary and teach ViaVoice what they sound like - an extension of the spelling-dictionary operations you may be familiar with from word processors.

Language models are a bit trickier to explain. Start with a large number of typical dictation documents, feed them into a shredder, and out pops a language model at the other end. Well, maybe not a shredder. The secret is they are dissected by elves. Honest.

However they are created, you can think of a language model as a collection of trigrams (groups of three consecutive words). The probability that a given spoken word really is a specific written word is influenced by the words around it, and trigrams capture this relationship. By no means should you equate "language model" with "grammar" - they are not at all the same, as grammars reflect complex usage rules that are difficult even for most humans to apply correctly all the time.

To illustrate further, consider the sentence "Please write to Mr. Wright right away." We hand this to the elves, who produce trigrams along the lines of those shown in Figure 1. Now, assuming that you don't pronounce "write", "Wright", and "right" in noticeably different ways, how does this system figure out which sounds correspond to which words? The trigrams from this sentence, combined with other trigrams from other sentences, end up with results like "If it's preceded by 'Mr.', it's most likely 'Wright'", and so on. ViaVoice is very good at this kind of thing and, where it makes mistakes, it can even learn not to make them in the future through correction, which improves accuracy.


Figure 1. Basic Trigram Construction.

By the way, you might want to take a moment to consider the word "to" in the Figure 1 example. How does ViaVoice know the difference between "to" and "two" and "too" when they all sound the same? The answer, once again, is by context, as captured in the trigrams!

The More You Use It, the Better It Gets

"I find it interesting that the software is learning about me while I am learning about it."

This customer remark is based on the realization that, as you use ViaVoice, it actually continues to improve your voice model. For example, if you add words that the system doesn't know, those get added to the model. If you make corrections using the correction window (See Figure 2), those corrections get applied to the voice model.


Figure 2. Correction Window - if your associate is actually Mr. "Right".

Also, if you have text documents containing phrases, words, and names that you expect to dictate often, you can analyze those documents, which will further update the voice models. ViaVoice Enhanced has extra facilities that integrate the document analysis features into dictation, so that you don't need to store up documents and run the analysis program yourself.

It is not unusual to hear of accuracy improving from 95% to 98% with persistent use.

Speaking of Japanese

And now, on to the code, and how ViaVoice adopts and extends the Mac OS to bring you dictation as seamlessly as Japanese.

The Mac was probably the first platform to make internationalization a key design objective. As a result, many components of the Mac OS, including almost all Toolbox functions, work with other languages and script systems out of the box.

You've probably all dealt with localization issues before, ranging from not hardcoding strings to not making assumptions about the size of the label on a button when translated. Some of you have no doubt dealt with the multibyte matters arising from making a product work correctly with Japanese, ranging from not being able to make byte = character assumptions to working nicely with Input Methods.

With the distribution of Mac OS 9, it became easier for any Mac user to install multiple input methods, for languages such as Japanese or Korean or different eastern European script systems as well as the "default" (typically Roman) keyboard system. Previously, while such capabilities were available, input methods were not widely used outside of countries where they were absolutely required.

An input method traditionally allows you to enter text in a different script system by typing a few characters, pulling up palettes based on those characters, and selecting similar symbols from those palettes.

There are two forms of input associated with an input method: inline and bottomline, shown in Figures 3 and 4 respectively. Inline input is generally preferred by users; bottomline input requires you to enter data in one place and have it show up in another place, in another font. Bottomline input also lacks the ability to go back and edit later.


Figure 3. Direct inline support in Japanese.


Figure 4. Japanese bottomline input (text entered in lower window).

In designing ViaVoice Enhanced direct dictation, we decided that we could use the input method architecture developed by Apple for non-Roman languages like Japanese, and use it for word-based input and correction. We did end up needing to extend the model slightly, as I'll describe later, but we did it in the background so that, in some circumstances, if you do the work to allow Japanese inline input, you also get ViaVoice speech inline input and correction for free!

TSM, TSMTE, IBM VV & U

These acronyms represent the key players in IBM ViaVoice dictation into any application.

Introduced way back in System 7.1, the Text Services Manager (TSM) has provided functionality for other languages (primarily multibyte languages) for years. With TSM, a savvy developer could write a few lines of initialization code and then install four Apple event handlers that, much like AppleScript handlers, performed operations like inserting text, showing and hiding the bottomline input window, and telling the input method where a given text offset was. TSM is well documented in Inside Macintosh: Text, where it forms all of chapter 7.

Later, Apple introduced the Text Services Manager for TextEdit (TSMTE), which eliminated the need to write any of the event handling code in applications that only used TextEdit - from this point, it was only necessary to initialize the manager and let TSMTE handle the rest. This functionality is well documented in Apple's Tech Note TE27.

Full inline input was not achieved until a new fifth Apple event was added. This is the GetText or 'gtxt' event, and it is only documented in a develop article by Tague Griffith (issue 29), or in Apple's Tech Note 10002 (which is only available in Japanese).

For speech, we determined that the above was not enough. While we could have gone our own way with a completely different model and then tried to sell it to developers, we decided instead to augment the existing TSM calls, adding a couple of extra parameters to GetText and adding another event, which we call SetSelection. With just these two changes, we have the necessary and sufficient conditions for dictation and correction. Sure, we could do more with more events (and may extend the system to enhance functionality in the future), but you're busy trying to figure out how to get your software to run under Carbon, so we thought we'd cut you a break! Oh, and as you may have inferred, if you've relied on TSMTE for inline support in Japanese, the changes to GetText and the addition of SetSelection are handled for you automatically. We'll talk about these additions later when I present the implementation code.

If you don't use TextEdit exclusively, you cannot rely on TSMTE for Japanese input, and you will likewise need to do the work of handling the calls and adding the parameters yourself. Even so, if you've done the work for Japanese (Japan being the second largest Mac market in the world), the incremental work for adding ViaVoice support borders on the trivial! As I write this, it may also be necessary to write these handlers for Mac OS X, whether or not you use TextEdit. By the time you read this, we'll have a better idea of what the story is. Either way, the solution is not difficult; the only uncertainty is which path Mac OS X will require.

Adding the Ears

By now, you're probably frothing at the prospect of dictation-enabling your code. Let's get right to it. In Listing 1, you'll see how to enable TSM as part of your startup routine and disable it as part of quitting (Carbon applications don't need to do this - the OS does it for you automatically). While you're enabling TSM, note that although you must set the "high level event aware" bit in the SIZE resource, you do not need to set the "Use text edit services" bit, because it is deprecated (it actually relates to an earlier implementation that predates TSMTE).

Listing 1: Becoming TSM Aware

Determining if TSM is available, initializing it, and cleaning up

This is more or less boilerplate code that is required for any application. Note that, under Carbon, you do not even need to make these calls, as the system will do them for you - that is, any Carbon application is TSM-aware without any special calls!


Boolean IsTSMAvailable (void) {
   
   SInt32         version;
   Boolean      available   = false;

   // Note: gestaltTSMgrAttr is not defined under Mac OS 9
   // so we use the gestaltTSMgrVersion selector instead...

   if (noErr == Gestalt (gestaltTSMgrVersion, &version)) {
      if (version >= gestaltTSMgr15) {
         available = true;
      } // if
   } // if
   
   return available;

} // IsTSMAvailable

// Global used to save and restore the Script Manager's
// font-force flag across the TSM session...

static SInt32   gFontForce;

void StartTSM (void) {

   // Initialize TSM, and install our event handlers...

   OSErr      err       = noErr;

   if (IsTSMAvailable ()) {
   
      #if TARGET_API_MAC_OS8
         err = InitTSMAwareApplication ();
      #endif

      // Install TSM event handlers here - see later section

   } // if   
   gFontForce = GetScriptManagerVariable (smFontForce);
   SetScriptManagerVariable (smFontForce, false);

} // StartTSM

void CloseTSM (void) {

   // Clean up all TSM things, including our event handlers...

   SetScriptManagerVariable (smFontForce, gFontForce);

   if (IsTSMAvailable ()) {
   
      #if TARGET_API_MAC_OS8
         (void) CloseTSMAwareApplication ();
      #endif
      
      // Remove AE handlers here...

   } // if   

} // CloseTSM


You'll notice that there is no special "is ViaVoice installed" code. Again, because we're an input method, the code you write works whether we're installed or not!

If you're only using TextEdit and dialog boxes, set the refcon field of your dialogs to kTSMTEInterfaceType and you're done. Go talk to your computer for a while. Tell your friends/family/coworkers I said it was OK.

Beyond TextEdit Support

Although many applications can live with just TextEdit, its 32K limit, among other things, has led many people to roll their own text engine or use a third-party code library such as WASTE, Marco Piovanelli's WorldScript-Aware Styled Text Engine, which is available in source code form all over the Web and does much of this work already (WASTE would need to be adjusted slightly to handle some of the modified events described here). Apple has also now made MLTE - the multilingual text engine - available. Any of these will require implementation of five Apple event handlers. The complexity of these handlers naturally depends on how your code is laid out, but in general, you're just providing an external API to functions or data that you have already written. And you need to do 98% of this for Japanese anyway, so why not squeeze in the 2% for speech? I'll even write the code for you.

But before we delve into the handlers themselves, some TSM terminology. The basics of TSM are discussed in chapter 7 of Inside Macintosh: Text, and also in Tague Griffith's article in develop 29 (see links section), so I will gloss over most of that - the code is pretty self explanatory when read with those references in hand. Ironically, Tague even suggests that input methods might one day be used for dictation input!

TSM keeps track of things on a document basis, where a document is a unique editable area of text. You'll need to add some extra handling to your event loop, so that TSM gets a crack at events it may need to intercept with TSMEvent(), and give TSM first crack at menu events with TSMMenuSelect() (you may have noticed that input methods typically have menus - the ViaVoice input method does not have a menu, but you do want to support Japanese too, right?). As well, when your TSM-aware documents become active and inactive, you need to tell TSM with ActivateTSMDocument() and DeactivateTSMDocument().

Input methods also might like to change the cursor (currently, the ViaVoice input method does not do so, but others do and ViaVoice may in the future), so in your cursor-management routines, or at idle time, call SetTSMCursor(). For this to work, your mouse-moved region (the final parameter to WaitNextEvent() which most people lazily set to NULL) needs to be a single point - since you have no way of knowing when TSM wants to change the cursor.

OK, that was pretty fast, but as it's been written before in the references above, I didn't want to repeat it. You can look at the sample application that comes with this article if you're lost.

The key part to dictation-enablement is the Apple event handlers. These handlers need to get installed for the input methods, including ViaVoice, to be able to extract information from your document content. ViaVoice doesn't vary much from the standard architecture, and I will highlight the differences.

Position to Offset Event

This event converts screen coordinates into an offset in your document. You receive a point, and return an offset. ViaVoice does not currently use this event.

OSErr
DoPos2Offset (DialogPtr inDialog, const AppleEvent*
                     inAppleEvent, AppleEvent* outReply) {

   Size                  actualSize;
   DescType            actualType;
   OSErr               err            = noErr;
   Boolean            dragging      = false;
   Point               currentPoint;
   SInt32               offset;
   SInt16               where;
   DialogItemType   dialogType;
   Handle               dialogHandle;
   Rect                  dialogBounds;
   GrafPtr            svPort;
   
   GetPort (&svPort);
   #if TARGET_API_MAC_OS8
      SetPort (inDialog);
   #else
      SetPortDialogPort (inDialog);
   #endif
   
   // Required parameter is a point...
   
   if (err == noErr) {
      err = AEGetParamPtr (inAppleEvent, keyAECurrentPoint,
                  typeQDPoint, &actualType, &currentPoint, 
                  sizeof (currentPoint), &actualSize);
   } // if
   
   // Optional parameter is for dragging...
   
   if (err == noErr) {
      (void) AEGetParamPtr (inAppleEvent, keyAEDragging,
                     typeBoolean, &actualType, &dragging, 
                     sizeof (dragging), &actualSize);
   } // if

   // Now, we should do all sorts of calculations, but,
   // TextEdit will more or less do this for us once we
   // figure out if it's in the right place...
   
   GlobalToLocal (&currentPoint);
   GetDialogItem (inDialog, kEditTextDialogItem, 
                        &dialogType, &dialogHandle, &dialogBounds);
   if (PtInRect (currentPoint, &dialogBounds)) {
      TEHandle      dialogTE = GetDialogTEHandle (inDialog);
      offset = TEGetOffset (currentPoint, dialogTE);
      where = kTSMInsideOfActiveInputArea;
   } else {
      offset = 0;
      where = kTSMOutsideOfBody;   // Point isn't in our text area
   } // if

   // Stuff the return values here

   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAEOffset, 
               typeLongInteger, &offset, sizeof (offset));
   } // if
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAERegionClass, 
               typeShortInteger, &where, sizeof (where));
   } // if
   
   SetPort (svPort);
   
   return err;

} // DoPos2Offset



Offset to Position Event

The reverse of Position to Offset: return a global point given a text offset. ViaVoice does not currently use this event.

OSErr
DoOffset2Pos (DialogPtr inDialog, const AppleEvent*
         inAppleEvent, AppleEvent* outReply) {

   Size                  actualSize;
   DescType            actualType;
   OSErr               err            = noErr;
   SInt32               offset;
   GrafPtr            svPort;
   Point               thePoint;
   TEHandle            teHandle      = GetDialogTEHandle (inDialog);
   Rect                  bounds;
   
   GetPort (&svPort);
   #if TARGET_API_MAC_OS8
      SetPort (inDialog);
      bounds = inDialog->portRect;
   #else
      SetPortDialogPort (inDialog);
      GetPortBounds (GetDialogPort (inDialog), &bounds);
   #endif

   // Required parameter is an offset position...
   
   if (err == noErr) {
      err = AEGetParamPtr (inAppleEvent, keyAEOffset,
                  typeLongInteger, &actualType, &offset, 
                  sizeof (offset), &actualSize);
   } // if
   
   // Convert the offset to a position, taking into 
   // account whether it's visible or not...
   
   if (err == noErr) {
      thePoint = TEGetPoint (offset, teHandle);
      if ((offset < 0) || (offset > (**teHandle).teLength)) {
         err = errOffsetInvalid;
      } else if (!PtInRect (thePoint, &bounds)) {
         err = errOffsetIsOutsideOfView;
      } // if
   } // if

   // Return the point (in global coordinates), and
   // the parameters of the text...

   if (err == noErr) {
      LocalToGlobal (&thePoint);
   } // if
   
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAEPoint, typeQDPoint,
               &thePoint, sizeof (thePoint));
   } // if
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAETextFont, 
                  typeLongInteger, &(**teHandle).txFont, 
                  sizeof (SInt32));
   } // if
   if (err == noErr) {
      Fixed theFixed = Long2Fix((**teHandle).txSize);
      err = AEPutParamPtr (outReply, keyAETextPointSize, 
                  typeFixed, &theFixed, sizeof (theFixed));
   } // if
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAETextLineHeight, 
                  typeShortInteger, &(**teHandle).lineHeight,
                  sizeof (SInt16));
   } // if
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAETextLineAscent, 
                  typeShortInteger, &(**teHandle).fontAscent,
                  sizeof (SInt16));
    } // if
   
   SetPort (svPort);
   
   return err;

} // DoOffset2Pos

Update Active Input Area Event

This event is used to hilite an area of your document as requested by the input method. ViaVoice does not currently use this event.

OSErr
DoUpdateActiveInputArea (DialogPtr inDialog, const
         AppleEvent* inAppleEvent, AppleEvent* /*outReply*/) {

   Size               actualSize;
   DescType         actualType;
   OSErr            err               = noErr;
   AEDesc            theTextDesc      = { typeNull, NULL };
   AEDesc            theHiliteDesc   = { typeNull, NULL };
   AEDesc            theUpdateDesc   = { typeNull, NULL };
   SInt32            fixLength;
   TextRange      thePinRange;
   TEHandle         teHandle         = GetDialogTEHandle (inDialog);
   ScriptLanguageRecord   scriptCode;

   // Required parameters containing firmed text, script,
   // and fixed length...

   if (err == noErr) {
      err = AEGetParamDesc (inAppleEvent, keyAETheData, 
                  typeChar, &theTextDesc);
   } // if
   if (err == noErr) {
      // Note: "Inside Macintosh - Text" says this parameter
      // is under keyAEScriptTag, but in practice it appears to
      // be under keyAETSMScriptTag...
      err = AEGetParamPtr (inAppleEvent, keyAETSMScriptTag, 
                  typeIntlWritingCode, &actualType, &scriptCode,
                  sizeof (scriptCode), &actualSize);
   } // if
   if (err == noErr) {
      // Note: "Inside Macintosh - Text" says this parameter 
      // is required, but in reality, it seems to be optional
      // and not sent (and redundant with the actual size of
      // the data in  theTextDesc) - we won't use or rely
      // on it...
      (void) AEGetParamPtr (inAppleEvent, keyAEFixLength,
                     typeLongInteger, &actualType, &fixLength,
                     sizeof (fixLength), &actualSize);
   } // if
   
   // Optional parameters hilite range list, update range,
   // and Pin range; we don't use any of these...
   
   if (err == noErr) {
      (void) AEGetParamDesc (inAppleEvent, keyAEHiliteRange,
                   typeTextRangeArray, &theHiliteDesc);
   } // if
   if (err == noErr) {
      (void) AEGetParamDesc (inAppleEvent, keyAEUpdateRange,
                   typeTextRangeArray, &theUpdateDesc);
   } // if
   if (err == noErr) {
      (void) AEGetParamPtr (inAppleEvent, keyAEPinRange,
                     typeTextRange, &actualType, &thePinRange,
                     sizeof (thePinRange), &actualSize);
   } // if
   
   // At this point, we need to be inserting text, 
   // most probably...
   
   if (err == noErr) {
      #if TARGET_API_MAC_OS8
         SInt8      hState;
         hState = HGetState ((Handle) theTextDesc.dataHandle);
         HLock ((Handle) theTextDesc.dataHandle);
         TEDelete (teHandle);   // Clean first...
         TEInsert (*(theTextDesc.dataHandle), GetHandleSize
            ((Handle) theTextDesc.dataHandle), teHandle);
         HSetState ((Handle) theTextDesc.dataHandle, hState);
      #else
         // AEDescs are opaque under Carbon. So we need
         // to allocate and copy using the accessor APIs.
         // OK, fine...
         Size         dataSize = AEGetDescDataSize (&theTextDesc);
         Handle      dataCopy = NewHandle (dataSize);
         if (dataCopy != NULL) {
            HLock (dataCopy);
            err = AEGetDescData (&theTextDesc, *dataCopy,
                      dataSize);
         } else {
            err = memFullErr;
         } // if
         if (err == noErr) {
            TEDelete (teHandle);   // Clean first...
            TEInsert (*dataCopy, dataSize, teHandle);
         } // if
         if (dataCopy != NULL) {   
            DisposeHandle (dataCopy);
         } // if
      #endif
   } // if
   
   // Clean up...
   
   (void) AEDisposeDesc (&theTextDesc);
   (void) AEDisposeDesc (&theHiliteDesc);
   (void) AEDisposeDesc (&theUpdateDesc);
   
   return err;

} // DoUpdateActiveInputArea

Get Text Event

The GetText event is a mystical event introduced by Apple Japan and, until recently, documented only in Japanese. This event allows the input method to request the application to return text that has already been committed to the document. ViaVoice uses this event to extract text for correction.

More than that, however, ViaVoice expects two additional parameters: the offset start and the offset end. Why is this? Because, unlike simple text-editing Input Methods, ViaVoice distinguishes between the first utterance of the word "the" and the second - it actually keeps track of all the dictated text for a session, the relative words, and so on, in order to make correction work. If ViaVoice were to allow correction simply based on the text of the word, all of the additional contextual information and even the audio data would be useless!

Thankfully, if you use TextEdit, the extra parameters are added for you by ViaVoice, but for other wordprocessing situations, you'll need to add them. Relatively painless, in most cases, since you probably know the offsets of the selection anyway!

OSErr
DoGetSelectedText (DialogPtr inDialog, const AppleEvent*
         /*inAppleEvent*/, AppleEvent* outReply) {

   OSErr            err            = noErr;
   TEHandle         teHandle      = GetDialogTEHandle (inDialog);
   Handle            textHandle   = (**teHandle).hText;
   SInt8            hState         = HGetState (textHandle);
   SInt64            selStart      = (**teHandle).selStart;
   SInt64            selEnd         = (**teHandle).selEnd;
   
   // The only required return is the current selected text...
   
   HLock (textHandle);
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyAETheData, typeText,
                *textHandle + selStart, selEnd - selStart);
   } // if
   
   // For ViaVoice, we also add the numeric values of the
   // start and end of the selection within the text...
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyVVStartSelectionParam,
                typeSInt64, &selStart, sizeof (selStart));
   } // if
   if (err == noErr) {
      err = AEPutParamPtr (outReply, keyVVEndSelectionParam,
                typeSInt64, &selEnd, sizeof (selEnd));
   } // if
   
   HSetState (textHandle, hState);
   
   return err;
} // DoGetSelectedText

Set Selection Event

This event is new and unique to ViaVoice. For TextEdit, it is implemented for you in one of the ViaVoice extensions (with one caveat of course: if you don't expect the selection to change within your TextEdit fields, you may be surprised to see it change, or if you duplicate the selection range in one of your own data structures, you may end up out of sync).

Really, all this does is ask your application to change the active selection. This is necessary so that commands such as "correct 'the'" work as the user expects.

OSErr
DoSetSelection (DialogPtr inDialog, const AppleEvent*
          inAppleEvent, AppleEvent* /*outReply*/) {

   Size                  actualSize;
   DescType            actualType;
   OSErr               err            = noErr;
   TEHandle            teHandle      = GetDialogTEHandle (inDialog);
   SInt64               selStart;
   SInt64               selEnd;
   Boolean            doDraw         = false;
   
   // This is a ViaVoice-specific event. Retrieve the
   // selection, and the optional draw parameter, and do it...
   if (err == noErr) {
      err = AEGetParamPtr (inAppleEvent,
               keyVVStartSelectionParam, typeSInt64, &actualType,
               &selStart, sizeof (selStart), &actualSize);
   } // if
   if (err == noErr) {
      err = AEGetParamPtr (inAppleEvent,
               keyVVEndSelectionParam, typeSInt64, &actualType, 
               &selEnd, sizeof (selEnd), &actualSize);
   } // if
   if (err == noErr) {
      (void) AEGetParamPtr (inAppleEvent,
               keyVVDrawSelectionParam, typeBoolean, &actualType, 
               &doDraw, sizeof (doDraw), &actualSize);
   } // if

   // Clip off the ends to TextEdit range...

   if (err == noErr) {
      if (selEnd > 0x7fff) {
         selEnd = 0x7fff;
      } // if
      if (selStart > 0x7fff) {
         selStart = 0x7fff;
      } // if
      if (selStart < 0) {
         selStart = 0;
         err = paramErr;
      } // if
      if (selEnd < 0) {
         selEnd = 0;
         err = paramErr;
      } // if
   } // if
   if (err == noErr) {
      TESetSelect (selStart & 0x7fff, selEnd & 0x7fff,
                            teHandle);
   } // if
   if ((err == noErr) && doDraw) {
      // We would explicitly redraw here, but TESetSelect
      // already draws the new selection, so there's no point -
      // honoring this optional parameter with a second draw
      // would only make the screen flicker...
   } // if
      
   return err;

} // DoSetSelection



Fine Tuning

The Mac OS is a cooperative multitasking system. ViaVoice direct dictation involves the cooperation of at least four different applications, all of which need a slice of time. See Figure 5 for an overview of how the components interact. The recognition engine, in fact, will shut off if it doesn't get enough time to handle the incoming audio stream which, like the real world, isn't very cooperative. Audio data is big, so the amount of time before a shutdown is small. You can simulate this by clicking on a menu in Mac OS while the microphone is on.


Figure 5. Interprocess Communication within ViaVoice - and to your application - requires that everybody share the processor equitably!

So, what you need to keep in mind is that your application, when in the foreground, should be as friendly as it can be to the other processes, particularly by calling WaitNextEvent frequently enough that the recognition engine gets time to process audio, the dictation manager gets enough time to assemble words and send them to your application, and the VoiceCenter gets enough time to communicate status and feedback to the user. Some applications try to avoid WaitNextEvent in order to improve their own apparent performance, but if you do this with ViaVoice, you won't get very good throughput, and you may even starve the engine into shutting down.

Not everything that you can type into is appropriate for dictation. Sure, you could say that text is text, but dictation isn't really the same as data entry. Right now, there is no way to restrict dictation to numbers, or to constrain the dictation search to a single-word answer or a set of words that might be appropriate for a given field. Rather, we're focusing on freeing up the keyboard and mouse so that the user can speak and think for prolonged periods in large bodies of text, like letters, email, or other prose. This is not for entering a choice into some tiny eleven-point text field!

Test Drive

Bring up your application, start ViaVoice direct dictation services, activate the dictation system with the phrase "begin direct dictation", and then, when the system is ready, click in a text field of your application and dictate the phrase "Please write to Mr. Wright right away [period]". After a couple of seconds, you should see the text appear. Even if there are no errors, say "correct mister": "Mr." will highlight, and the correction window will open with alternatives, as in Figure 6.


Figure 6. Correcting in your application.

Then, you can pick one of the alternatives, or say "close correction window". For more things to try, consult the ViaVoice Users Guide.

Getting Creative

Beyond direct dictation, there are other things you can do. You can write AppleScripts to control your application to perform routine operations. You could even have a "secret about box" phrase bring up a nice little Easter egg in your product. Likewise, you could have other key phrases that, rather than processing as text, trigger behaviors or commands. I'm sure there are other things that I've not thought of yet.

Future Directions

"Prediction is difficult, especially about the future."

A word about future versions. Simple extrapolation from ViaVoice Millennium late last year to Enhanced in the middle of this year should suggest that the ViaVoice for Mac team has been busy, and continues to be busy, adding features, fixing bugs, and getting the product into our customers' hands. I cannot say what will result from this activity, but it is likely that developer opportunities, already greatly expanded with Enhanced, will continue to grow as the product line itself evolves and matures.

An interesting point on this topic is that ViaVoice is the only speech software technology that is currently available for and has an installed base on Windows, Mac OS, and several flavors of Linux. If you're thinking cross-platform speech, this is where you want to be.

Microphone Off

There you have it. The amount of work needed to get dictation into your application varies from "it works already for free" to "I had to write a couple of Apple event handlers." Beyond that, you can get as creative as you want.

ViaVoice for Macintosh has been a best seller since its introduction in 1999, and with ubiquitous dictation availability in the latest edition, there's a good chance that many of your customers will have ViaVoice and want to dictate into your application. Believe me, requests of the form "I'd like to be able to dictate into Application X" outnumber just about any other feature request. Make it so!

Acknowledgements

I would like to thank Deborah Grits, Eddie Epstein, Jeff Kusnitz, and Paul McLellan for taking the time to give me feedback on this article as it was being written. My special thanks to the rest of the ViaVoice for Mac team who broke new ground on the Mac - twice - and helped bring the future closer.

Links


Erik has been working on Mac development throughout modern history, and, in that time, has done everything from drivers to GUI, and from telecom to graphics processing. Last year, in search of new challenges, he joined the ViaVoice for Mac team at IBM in Florida, and has led the recent release of the Enhanced Edition, which, it is not commonly known, is written specifically for the Mac from the ground up. In his spare time, he breeds noncarnivorous slinkys in the desks of unsuspecting coworkers. You can reach Erik at esea@us.ibm.com.


© Copyright International Business Machines Corporation, 2000. All Rights Reserved.

Note to U.S. Government Users --- Documentation related to restricted rights --- Use, duplication or disclosure is subject to restrictions set forth in GS ADP Schedule Contract with IBM Corp. INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS ARTICLE "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

This publication could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes may be incorporated in new editions of the article. In addition, IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this article at any time.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites and use of those Web sites is at your own risk. Finally, this article contains sample application programs in source language, which illustrates programming techniques for the subject matter. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

 
