Posted by: vertigo on: December 20, 2020, 04:58:50
Quote: "This basically sounds like a big.LITTLE setup, except just one smart little core that uses 'AI' (unclear if real or a buzzword in this case) to determine what tasks to handle, rather than relying on the OS for instructions."

To me, it doesn't sound like that at all. It sounds like the chip has two basic functions:

1) System monitoring, where it's all about efficiency. It might be programmable, but it won't be running general software (I imagine it won't be x86 at all). As I said, it's all about efficiency. It might be very slow, since monitoring probably doesn't take much computational power, whereas waking the main CPU wastes a lot of energy. Who knows; it depends on what exactly they're trying to offload.

2) AI, probably focused on reducing power draw. I don't know what you consider proper AI. We're probably talking about computer vision: recognizing people, perhaps hardware acceleration for face recognition (since they mention security as well). Hopefully it will also work with the IR cameras used for Windows Hello, so you can still keep a shutter on the webcam.
What they're trying to do is difficult enough with a webcam; trying to do it without one is even sillier. Have you never sat further away, off to the side of a laptop, but still watched the screen? Even with a camera, who knows how it will handle such a scenario (what is the field of view, etc.). It's quite annoying when displays automatically dim while you're trying to watch them.
That's basically the point of big.LITTLE, though: use slow, efficient cores when possible to avoid firing up the more power-hungry ones. Not exactly the same, but a similar concept. At least that's how it seems to me, though I'm sure there's more to it.
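To make the comparison concrete, the big.LITTLE idea boils down to routing light work to efficient cores and waking a fast core only when the load demands it. A toy sketch of that routing decision (this is an illustration of the concept, not the actual ARM or OS scheduler; the capacity number is made up):

```python
# Toy illustration of the big.LITTLE idea: each task carries an
# estimated load, and light work goes to an efficient "LITTLE" core,
# while heavy work wakes a power-hungry "big" core.

LITTLE_CAPACITY = 0.4  # made-up relative capacity of an efficient core


def assign_core(task_load):
    """Return which core class should run a task with the given
    normalized load (0.0-1.0)."""
    return "LITTLE" if task_load <= LITTLE_CAPACITY else "big"


tasks = {"background sync": 0.1, "ui idle": 0.05, "video encode": 0.9}
placement = {name: assign_core(load) for name, load in tasks.items()}
# Light tasks stay on the efficient core; only the encode wakes "big".
```

The monitoring chip described above would be even more specialized than a LITTLE core, but the energy logic is the same: keep the expensive silicon asleep as long as possible.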
And yes, certain things will likely not be possible without the webcam, and others will be more difficult. But my point is that, as much as possible, they should support these functions with other sensors as well, instead of relying solely on the webcam, so that they keep working when the shutter is closed rather than simply losing all those functions and making the chip worthless.

I guess I've never had the problem you describe: when I'm using a laptop, I'm using it, and I'm not sure why the screen would dim, since I set it to a brightness and it stays there (I assume you're talking about a power-saving setting that dims it when idle). But a depth sensor should be able to do much of what a camera would do for such tasks. For example, instead of a camera seeing that you're in front of the computer, even from several feet away, a depth sensor could do the same, keeping the screen brightness up. There would obviously be some limitations, but it would still work far better than relying solely on the camera and then having the shutter closed.
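To sketch what that depth-only presence check might look like: here's a rough Python illustration, where read_depth_mm() and the distance/timing constants are entirely hypothetical stand-ins for whatever the sensor's driver actually exposes. The point is just that a single distance reading, with no image at all, is enough to decide "someone is sitting here, keep the screen up":

```python
# Hypothetical sketch: presence detection from a depth sensor alone,
# no camera image needed. The constants and sensor API are made up.

PRESENCE_RANGE_MM = (300, 2500)   # assume a user sits 0.3-2.5 m away
ABSENT_SAMPLES_BEFORE_DIM = 30    # require sustained absence, so a brief
                                  # lean-back doesn't dim the screen


def is_present(depth_mm):
    """A single depth reading counts as 'present' if something solid
    sits within the expected user-distance band. None means the sensor
    saw nothing in range."""
    lo, hi = PRESENCE_RANGE_MM
    return depth_mm is not None and lo <= depth_mm <= hi


def should_dim(readings):
    """Given recent depth samples (oldest first), dim only after a
    sustained trailing run of 'absent' samples."""
    absent_streak = 0
    for r in readings:
        if is_present(r):
            absent_streak = 0
        else:
            absent_streak += 1
    return absent_streak >= ABSENT_SAMPLES_BEFORE_DIM
```

With a policy like this, the webcam shutter could stay closed and the screen would still hold its brightness while you're in front of it; a camera would only be needed for the finer-grained features (gaze, identity, etc.).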