Hello. In my work, I use software speech for the most part. However, I
work in an environment with two sound cards, so my synthesizer uses one
and all other audio uses the other. I find this works much better for me
than trying to mix audio streams through the same sound card.
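For what it's worth, on a Linux system with ALSA this kind of split can
be described in an .asoundrc file. This is only a sketch, and the card
numbers and the "speech" device name are assumptions for illustration,
not a known-working setup:

```
# ~/.asoundrc (sketch; card numbers are assumptions)
pcm.!default {
    type hw
    card 0        # all ordinary audio goes to the first card
}
pcm.speech {
    type hw
    card 1        # point the software synthesizer at this device
}
```

The idea is simply that the synthesizer opens one PCM device while
everything else keeps the default, so the two streams never need to be
mixed in software.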
While the case for hardware speech has become much less compelling in
recent years, here are a few reasons that hardware speech is still worth
considering:
O If you don't have a separate sound card and you really don't want to
mix regular audio with your synthesizer's speech, hardware speech is the
only way to go.
O While software speech doesn't take much in the way of system resources
on today's systems, it does require some, and it's particularly
sensitive to time lags on a busy computer system. So, if you're doing
something very CPU- or I/O-intensive, off-loading the job of creating
speech to a peripheral can be very helpful.
O Higher compatibility with different operating systems. Software speech
requires that a software module be running on the computer generating the
speech that's devoted to that purpose. If you're running a system where
that software module isn't available, you can't have software speech.
Hardware synthesizers, on the other hand, usually attach via a serial or
USB port, meaning they can interface with a large number of systems with a
variety of operating systems. And, to drive them, often you just send
strings of text down a serial port or through a USB connection. That is
generally much easier than generating the sound data a software
synthesizer must produce.
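To give a feel for how simple that interface can be, here's a minimal
Python sketch. The details are assumptions for illustration: the device
path, baud rate, and the convention that the unit starts speaking when
it sees a carriage return vary by synthesizer, and opening the real port
would typically use the third-party pyserial library. The speak()
function itself just needs any writable binary stream:

```python
import io

def speak(port, text):
    """Send a line of text to a serial-attached synthesizer.

    Many hardware units begin speaking when they receive a carriage
    return; that terminator is an assumption here -- check your
    synthesizer's manual for its actual conventions.
    """
    port.write(text.encode("ascii", errors="replace") + b"\r")

# With real hardware you might open the port like this (pyserial,
# device path and baud rate are assumptions):
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   speak(port, "Hello from the hardware synthesizer")

# For demonstration, an in-memory buffer stands in for the port:
buf = io.BytesIO()
speak(buf, "Hello")
```

The point is that the "driver" is little more than writing text, which
is why so many operating systems can talk to these devices.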
I'm sure there are other scenarios where hardware speech is very
valuable, and I'm eager to see what folks have to say on this topic.