I remember how much fun it wasn't trying to adjust some PayPal settings; I ended up having to call them, since I couldn't find the control for managing devices. Luckily, they were able to do what needed to be done from their end. The problem was that my push notifications had become disabled, with no way to turn them back on.
----- Original Message -----
From: "Marten Post Uiterweer" <firstname.lastname@example.org>
Sent: Monday, February 01, 2016 1:41 PM
Subject: Re: How Can Sighted People Tell Where I Am At on a Screen in JAWS?
True. A few weeks ago, a sighted person talked me through a webpage. The
braille viewer was on, so he saw what JAWS was showing, but he had to
compare it with the webpage each time to see where I was on the page.
On Mon, 01 Feb 2016 11:36:45 -0800 "Brian Vogel" <email@example.com> wrote:
On Mon, Feb 1, 2016 at 11:24 am, Marten Post Uiterweer <firstname.lastname@example.org> wrote:
The braille viewer is very useful. Of course it will not show things in
braille. It will show the text that is also shown on a braille display,
and a braille display will show what JAWS speaks, so the braille viewer
will also show what is spoken. Not completely, but for the most part.
This can indeed be very useful in its own right, but take it from a sighted helper: it doesn't solve the original problem posed. Most of us can tell precisely what JAWS is reading and saying; the problem is that we have absolutely no idea where that is on the web page itself. On a text-rich webpage in particular (long Wikipedia pages are an excellent example), JAWS can be reading multiple scrolled pages ahead of what is left visible on the screen. Figuring out where that actually is on the web page is often a major production that breaks both the flow and the train of thought for the listener.
I still have not received a reply from FS Technical Support about whether there is a practical way to make JAWS force Windows to scroll the web browser so that what's being read corresponds to something an assistant can actually see on the screen at that moment.