Re: Hardware vs. Software Speech Synthesizers
You can also install a second sound card in your computer and set up JAWS to use one card while all other sounds play through the second. When you work with audio, JAWS speech comes out of one sound card and the audio out of the other. This way you can keep the two separate and work with the audio more conveniently.
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Page, Aaron
Sent: Tuesday, January 17, 2017 1:43 PM
Subject: Hardware vs. Software Speech Synthesizers
Moderators – feel free to shut the thread down if this isn’t JAWS relevant enough. Another thread discussing issues with a TripleTalk USB speech synthesizer got me thinking about this...
I am curious what benefits JAWS or other screen reading software users get from internal or external hardware speech synthesizers compared to software-based speech synthesizers. Doing some reading on Google, the best I found was this brief statement on the AFB site:
“Instead of using the system's sound card, these devices create and emit speech through their own speaker system. This not only frees up system resources on the PC, but it also allows the sound card to be used exclusively for other audio.”
Is this the main benefit of using hardware-based synthesizers? Resources don’t seem like an issue anymore – average PCs have more power than most users need – and the ability to direct speech output to a different sound card seems like a really specific and rare use case to me.
I am sure there are some users who stick with a particular hardware synthesizer just because they are used to the way it sounds, but I am curious what other benefits there might be. Thanks for any info you are willing to share!
Aaron M. Page