Hardware vs. Software Speech Synthesizers
Page, Aaron
Moderators – feel free to shut the thread down if this isn’t JAWS relevant enough. Another thread discussing issues with a TripleTalk USB speech synthesizer got me thinking about this...
I am curious what benefits JAWS or other screen reading software users get from the use of internal or external hardware speech synthesizers compared to software-based speech synthesizers. Doing some reading on Google, the best I found was this brief statement on the AFB site:
“Instead of using the system's sound card, these devices create and emit speech through their own speaker system. This not only frees up system resources on the PC, but it also allows the sound card to be used exclusively for other audio.”
Is this the main benefit of using hardware-based synthesizers? Resources don't seem like an issue anymore, since average PCs have more power than most users need, and the ability to direct speech output to a different sound card seems like a really specific and rare use case to me.
I am sure there are some users who stick with a particular hardware synthesizer just because they are used to the way it sounds, but I am curious what other benefits there might be. Thanks for any info you are willing to share!
Aaron M. Page
Dave...
On one of my computers, I use the hardware synth because it is more responsive. The other point already raised, having a separate sound card for audio apart from JAWS, is another benefit.
Dave
Oregonian, woodworker, engineer, musician, and pioneer
Pablo Morales
You can also install a second sound card in your computer and set up JAWS to use one card while all other sounds go through the second. If you work with audio, JAWS will speak through one card and the audio will play through the other. This way you can separate the two and work with the audio more conveniently.
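For anyone who wants to experiment with this kind of routing, here is a minimal sketch in Python using the sounddevice library. It just plays a test tone on a chosen second card; the device index is an assumption for illustration (run query_devices() to find yours). A screen reader routed to a second card works on the same principle: each output stream can be pointed at a specific device.

    # Minimal sketch: sending one audio stream to a specific (second) sound card
    # with the "sounddevice" library, leaving the default card free for other audio.
    import numpy as np
    import sounddevice as sd

    print(sd.query_devices())  # list every playback/capture device with its index

    SECOND_CARD = 3            # hypothetical index of the second card; adjust to yours
    RATE = 44100

    # Generate a one-second 440 Hz test tone standing in for speech output.
    t = np.linspace(0, 1.0, RATE, endpoint=False)
    beep = 0.2 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

    # Everything played with device=SECOND_CARD comes out of that card only.
    sd.play(beep, RATE, device=SECOND_CARD)
    sd.wait()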
Brian Buhrow
Hello. In my work, I use software speech for the most part. However, I work in an environment with two sound cards, so my synthesizer uses one and all other audio uses the other. I find this works much better for me than trying to mix audio streams through the same sound card. While the case for hardware speech has become much less compelling in recent years, here are a few reasons it is still advantageous:
O If you don't have a separate sound card and you really don't want to mix regular audio with your speech synthesizer, hardware speech is the only way to go.
O While software speech doesn't take much in the way of system resources on today's systems, it does require some, and in particular it is sensitive to time lags on a busy computer. So, if you're doing something very CPU- or I/O-intensive on your system, off-loading the job of creating speech to a peripheral can be very helpful.
O Higher compatibility with different operating systems. Software speech requires that a software module devoted to generating the speech be running on the computer. If you're running a system where that module isn't available, you can't have software speech. Hardware synthesizers, on the other hand, usually attach via a serial or USB port, meaning they can interface with a large number of systems running a variety of operating systems. And to drive them, you often just send strings of text down the serial port or USB connection (see the sketch below), which is generally much easier than generating the sound data a software synthesizer produces.
I'm sure there are other scenarios where hardware speech is very valuable, and I'm eager to see what folks have to say on this topic.
-Brian
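To make Brian's last point concrete, here is a minimal sketch of driving a hypothetical serial-attached synthesizer from Python with the pyserial library. The port name, baud rate, and carriage-return terminator are assumptions for illustration; real units such as a DECtalk or TripleTalk each have their own settings and command codes, documented in their manuals.

    # Minimal sketch: "speaking" through a hypothetical serial-attached hardware
    # synthesizer. No audio is generated on the PC; the text itself is the protocol.
    # Port name, baud rate, and the trailing carriage return are assumptions.
    import serial  # pip install pyserial

    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as synth:
        synth.write(b"Hello from the screen reader.\r")

Because the host only ships bytes down the wire, the same few lines work on any operating system with a serial port driver, which is exactly the compatibility argument above.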
Mario
As others have already said, another advantage is that the speech can be separate from the main sound card. For example, you can have a screen reader speak through the hardware synth while playing music through the main sound card, which, while you use the computer, eliminates the "what did (whatever screen reader) say?" problem. Having and using a hardware synth also eliminates the memory and resources the screen reader would otherwise consume generating speech.
Tony
When you can add a second sound card, either internal or USB, for $10, there isn't much difference any more. I used to be able to load an external synthesizer before Windows if I needed to correct some problem, then continue with loading Windows. That hasn't been possible for quite a while.
Tony
Lisle, Ted (CHFS DMS)
I've tried them both over the years. As you stated, resources have not been a problem since the late '90s. I find the sound through even average desktop or monitor speakers to be better than that from most external devices. If you have an old DECtalk speaker around with an AF gain and a headset output, though, it might be a close contest.
Ted