The basic mechanism in both cases is the same: measure the time between a known start point and the moment a steadily sweeping patch of light becomes visible to a detector, then use knowledge of the scan rate and the geometry of the detector to calculate orientation.
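As a minimal sketch of that calculation (the 60 Hz sweep rate and 180° arc here are illustrative numbers, not Valve's or Nintendo's actual specs):

```python
import math

def sweep_angle(t_hit_s, sweep_period_s=1/60, sweep_arc_rad=math.radians(180)):
    """Convert the time between the sync event and the sweeping light
    hitting a photodiode into an angle. Assumes the light sweeps
    sweep_arc_rad at a constant rate over sweep_period_s (made-up values)."""
    angular_rate = sweep_arc_rad / sweep_period_s  # rad/s
    return t_hit_s * angular_rate

# A hit 1/240 s after the sync event lands a quarter of the way
# through the sweep: 180 deg / 4 = 45 deg.
print(math.degrees(sweep_angle(1/240)))  # ≈ 45.0
```

Two lighthouses (or two sweep axes) give two such angles, which is where the orientation and position solving starts.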
Valve's system is considerably more advanced than Nintendo's, of course. But then, they have had twenty years to work on it (I think that's about two and a half years in Valve Time).
The physics and engineering of light are not something I'm very knowledgeable about, so forgive me if this is naive. Couldn't you reduce the latency further by running the x and y scans in parallel, and using something (e.g. different modulations) to tell them apart? It seems kind of a waste to run the x and then the y in sequence.
If I'm not mistaken, most standard IR sensors have built-in demodulators. They most likely want to keep the parts needed for a receiver as off the shelf as possible.
I wonder how it'll compare to the system Technical Illusions uses for castAR's tracking. Especially given the divergence of the two teams.
(Technical Illusions being the company of Jeri and Rick, who left Valve taking the Augmented Reality tech with them.
Jeri Ellsworth basically built the Valve hardware division. Valve worked on AR and VR simultaneously and then canned the AR stuff. Gabe let Jeri take the AR stuff when that happened.
An interesting story that I've oversimplified. I think it was on an Amp Hour podcast, but don't quote me.)
My understanding is that Technical Illusions' tracking system is tied to the retro-reflective mat. The glasses see some LEDs on the mat and use that to determine their position and orientation. So the tracking is limited to the glasses being in view of the mat. If you turn around, you're no longer being tracked.
The Lighthouse system, on the other hand, is a full volume tracking system that can be expanded to cover any volume of space. So I could walk around a room and no matter where I am or where I'm looking, I can determine my location and orientation.
>My understanding is that Technical Illusions' tracking system is tied to the retro-reflective mat. The glasses see some LEDs on the mat and use that to determine their position and orientation. So the tracking is limited to the glasses being in view of the mat. If you turn around, you're no longer being tracked.
There's a wand PCB and a wand on there too; they aren't part of the marker.
The tracking marker is also covered in retro-reflective material, but it's not part of the mat itself. Relevant because there are clip-on adapters to turn the castAR goggles from AR to VR, and you don't need the mat for VR.
It may not be the final version, as the tracking system has gone through a few upgrades and tweaks already according to the updates.
That's not a limitation of the hardware; castAR has cameras and IMUs, and it could orient itself via any identifiable reference point, which need not be the mat. So this is more a problem of software than of hardware.
This sounds very cool, but from what I understand of the article, it would require direct emitter-receiver visibility, i.e. turning around and facing away from the emitter wouldn't work.
Similarly, if you used several emitters to circumvent the above problem, you'd run into aliasing issues, and you'd have to engineer the system so that different emitters can be clearly distinguished from each other.
The low latency is a big deal though. See-through AR is a particularly hard nut in terms of latency because just the slightest bit of it can completely destroy the experience (overlay images lagging behind real world), unlike VR where all the photons come from one source and some latency is tolerable.
The lighthouses seem to have a 90° sweep. The sensors on the Vive headset (but for some reason not the valve controllers) seem to be in 90° occlusion pits. This means that if the two lighthouses are at opposite corners of a (square) room, for the most part no single sensor will ever pick up both lighthouses at once, no matter where the headset is in the room or how it is oriented.
I think that was the initial strategy, but now they are going with a timing based or modulation based solution, and that's why the controllers, which were developed later, don't have occlusion pits. In this interview with Alan Yates of Valve, he mentions timing, modulation, the role of the LED array, and getting rid of the wire between lighthouses: https://soundcloud.com/hackertrips/alan-yates-of-valve-talks...
In theory, if you could place a second bank of receivers behind your head and had an accurate distance between the two, you could simplify the problem to just requiring line of sight.
It also sounds like they have a multiple-device fix available, the only question being whether they can devise a way to synchronize two of the devices without requiring the user to do any measurements.
You can stack as many Lighthouse units as you want in an arbitrary configuration (which will be killer for huge spaces spanning multiple rooms). I believe the Vive headset demo had 2 or 3 units covering a 15x15 foot area.
They also have sub-millimeter accuracy, to the point that people can pick up physical objects mapped in 3D space. That's how Valve demoed picking up the Vive controllers while you had the headset on: you saw the controllers in the 3D space, and they were exactly where they were in meatspace.
That's actually brilliant. I assume you then need two lighthouses so you can do triangulation and get the position of your sensor rather than just the angle to the lighthouse.
Nope, a single Lighthouse transmitter can give full position -- at some accuracy. If you use multiple photodiodes (receivers) with a known rigid-body transformation between them, you can infer the receiver unit's full 6-DoF pose.
For example, you could (naively) estimate distance using two photodiodes if you knew that they were separated by 5cm. If the difference in measured angles is small, they're far away. If it's large, then they're near. (The full 6-DoF situation is a little more complex, but the same idea.)
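A toy version of that naive two-diode range estimate, using the small-angle approximation (the 5 cm baseline and the angle values here are made up for illustration):

```python
import math

def naive_distance(baseline_m, angle1_rad, angle2_rad):
    """Very rough range estimate from two photodiodes a known distance
    apart: for a small angular separation, baseline ~= distance * dtheta,
    so distance ~= baseline / dtheta. Assumes the baseline is roughly
    perpendicular to the line of sight (illustrative sketch only)."""
    dtheta = abs(angle1_rad - angle2_rad)
    return baseline_m / dtheta

# Two diodes 5 cm apart whose measured sweep angles differ by half a
# degree are roughly 0.05 / 0.0087 m from the base station.
d = naive_distance(0.05, math.radians(30.0), math.radians(30.5))
print(round(d, 1))  # ≈ 5.7 (meters)
```

The full 6-DoF solve uses many diodes at once, but the geometric intuition is the same: smaller angular spread means a more distant receiver.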
> It can actually (poorly) track a moving object with just one sensor and one base station...
We didn't expect this at all, it is not immediately intuitive, but even 2D simulations display this behavior. If you can tell me why, really why, you should apply for a job at Valve. :)
(Apparently 5 sensor hits are required for a proper fix.)
You would be able to get some distance information via the modulation on the signal. If your photodiode response was able to pick up phase changes, you could estimate distance. The light arriving at closer diodes will have a different phase than at the further diodes. The article mentions "MHz" intensity modulation on the diodes, which gives you a wavelength of several meters.
Time of flight as you describe could work at the transmitter, but could not work at the Lighthouse receiving photodiode. To measure time of flight, you need to know (very precisely) when the light was initially transmitted. The synchronization flash is insufficient for this, as an error of just 1ns results in a 1ft error -- modulation or not. Even with modulation, the receiving photodiode doesn't have another signal with which to compare for ToF measurements.
Normal laser rangefinders are transmitting the light (modulated), and looking for the reflected return (modulated) at the same location as the transmitter. They use a PLL to determine the phase difference between the TX and RX (modulated) signals -- since the transmitter has both versions readily at hand. The phase difference corresponds to a time difference => distance. Note that the "reflector" (where the photodiode sits in Lighthouse) is not part of the equation.
In other words, laser rangefinder ToF measurements require coherent demodulation at the location of transmission.
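A minimal sketch of the rangefinder phase-to-distance relationship described above (the 10 MHz modulation frequency is an illustrative value, not any particular rangefinder's spec):

```python
import math

C = 3.0e8       # speed of light, m/s
F_MOD = 10e6    # assumed modulation frequency (illustrative)

def rangefinder_distance(phase_diff_rad):
    """One-way distance from the measured TX-vs-RX phase difference of
    the modulation envelope; the final factor of 2 accounts for the
    light's round trip to the reflector and back."""
    return phase_diff_rad * C / (2 * math.pi * F_MOD) / 2

# A quarter-cycle (pi/2) lag at 10 MHz (30 m modulation wavelength)
# corresponds to a target 30 / 8 m away.
print(rangefinder_distance(math.pi / 2))  # ≈ 3.75
```

This is exactly why the comparison has to happen at the transmitter: both the TX and RX copies of the modulated signal are available there for the PLL to compare.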
I wasn't referring to ToF measurement; I was referring to two diodes measuring the difference in phase of a subcarrier intensity-modulated signal. Essentially the subcarrier is an intensity modulation of the light: at XX MHz, the optical output power goes from 100% to 0% and then back.

If two diodes receive the light at the same distance, then the difference in phase will be zero (i.e. correlating the two signals will show 0 lag). If the two diodes are at different distances, the modulation arrives at the farther diode slightly later. Correlating the two received signals (or measuring the phase difference) directly corresponds to the difference in distance between the two diodes, because the light travels at a fixed speed, so the phase of the envelope directly corresponds to path length. Obviously there isn't a unique solution, since the diodes could be spaced at multiples of the modulation wavelength, but for a fixed area this isn't a problem.
This only requires knowledge of the subcarrier modulation frequency, as any diode can be picked as the reference and the others matched to it. This isn't an ideal way of doing distance measurement, because noise will greatly affect the estimate, but it's certainly doable.
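A toy simulation of that idea (the 10 MHz modulation frequency and gigahertz sample rate are made-up values, and this is an idealized noiseless sketch, not Lighthouse's actual scheme):

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
F_MOD = 10e6     # assumed 10 MHz intensity modulation (illustrative)
FS = 1e9         # simulation sample rate, Hz

def recover_path_difference(extra_path_m):
    """Simulate the modulation envelope seen by two diodes whose path
    lengths to the emitter differ by extra_path_m, then recover that
    difference from the phase lag between the two received envelopes."""
    t = np.arange(0, 2e-6, 1 / FS)   # 20 full modulation cycles
    delay = extra_path_m / C         # extra travel time to the far diode
    near = np.cos(2 * np.pi * F_MOD * t)
    far = np.cos(2 * np.pi * F_MOD * (t - delay))
    # Complex demodulation at F_MOD yields each signal's phase directly.
    lo = np.exp(-1j * 2 * np.pi * F_MOD * t)
    phase_lag = np.angle(np.sum(near * lo)) - np.angle(np.sum(far * lo))
    return phase_lag * C / (2 * np.pi * F_MOD)

# Unambiguous only within half the 30 m modulation wavelength.
print(recover_path_difference(10.0))  # ≈ 10.0
```

With noise, the phase estimate degrades quickly, which is the practical objection raised above; but the principle holds.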
I know how Lighthouse works -- I wrote the article. :) I was responding to the GP: "You would be able to get some distance information via the modulation on the signal."
I was telling GP, (1) how Lighthouse cannot use time of flight, and (2) how time of flight works for laser rangefinders.
> MHz intensity modulation gives you several meter wavelength
What do you mean by this? The light itself is not at MHz frequency, just the modulation of its intensity. It is infrared, which is ~700nm => 100s of THz.
When you modulate the light, the unambiguous range is not determined by the wavelength of light -- it's determined by the wavelength of the modulation. This is a pretty fundamental concept in radar (see FMCW radars). It's sort of like looking for the 0-to-1 transitions of the light instead of looking at the phase of the reflected light itself.
If you were to directly compare unmodulated light (the light's phase), then you'd actually be creating an interferometer. The distance measurement would be ambiguous beyond 1 wavelength (a very small value!). For example, you might measure a difference of 0.3λ, but you don't know how many full wavelengths away you are on top of that. So you might be 10um or 5000010um away. This is where modulation helps.
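A toy illustration of that wavelength ambiguity (the 850 nm carrier and 300 m modulation wavelength are illustrative values):

```python
def candidate_distances(measured_fraction, wavelength_m, max_range_m):
    """All path lengths consistent with a phase measured as a fraction
    of one wavelength: the same reading repeats every full wavelength."""
    d = measured_fraction * wavelength_m
    out = []
    while d <= max_range_m:
        out.append(d)
        d += wavelength_m
    return out

# At an 850 nm carrier, a reading of 0.3 wavelengths matches thousands
# of distances within just 5 mm; at a 300 m modulation wavelength the
# same reading is unique over a whole room (and far beyond).
print(len(candidate_distances(0.3, 850e-9, 0.005)))  # thousands
print(candidate_distances(0.3, 300.0, 200.0))        # ≈ [90.0]
```

The longer the wavelength you actually measure against, the larger the unambiguous range, which is exactly the FMCW-radar point made above.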
This isn't that complicated, and would be crazy expensive if it were. The modulation is just a way to identify your lighthouse and filter out others. IR remotes use 40 kHz or so; this uses MHz, probably with some sort of digital scrambling and filtering, or just plain channels. You would only need a few channels.
I know how Lighthouse works (I wrote the article). :) Like I said, "Like many IR systems, the LEDs and lasers are actually modulated (Alan said, "on the order of MHz"). This is useful for a few reasons: (1) to distinguish the desired light signals from other IR interferers such as the sun; and (2) to permit multiple transmitters with different modulation frequencies."
I was responding to the GP: "You would be able to get some distance information via the modulation on the signal." I was telling GP, (1) how Lighthouse cannot use time of flight, and (2) how time of flight works for laser rangefinders, where modulation is actually used for ToF measurements.
Ah yes, you could use time of flight even in the MHz range, where the wavelength of the modulation is 300 meters, if you have an accurate enough receiver, but that could be a problem.
I think the precision should be on the order of ~(A/D relative precision) * (wavelength), so for mm accuracy at a 300 m wavelength you need an A/D converter with better than ~3*10^(-6) relative precision (that is, a >18-bit A/D), which I don't think is cheap (but doable, perhaps?). The other problem is that this conflicts greatly with multipath and other kinds of interference.
I am referring to subcarrier intensity modulation. If you were to turn the optical power from 100% to 0% and then back to 100% in a sinusoidal fashion, you would be modulating the intensity of the output. The diode observes the extremely high optical frequency as a DC signal, onto which you are modulating another signal at a lower frequency, e.g. I(t) = cos(f_optical*t) * cos(f_modulation*t), where I(t) is intensity vs. time, f_optical is extremely high (THz) and f_modulation is the lower modulation frequency (MHz).
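A quick sketch of that envelope, treating the diode's output as the intensity itself since the optical carrier is far beyond its bandwidth (the 1 MHz subcarrier frequency is illustrative):

```python
import numpy as np

F_MOD = 1e6                    # 1 MHz subcarrier (illustrative)
t = np.arange(0, 2e-6, 1e-9)  # 2 us of signal at 1 ns steps

# Intensity envelope: optical power swinging 100% -> 0% -> 100%.
# The optical carrier (hundreds of THz) averages out within the
# diode's bandwidth, so the diode effectively sees only this envelope.
I = 0.5 * (1 + np.cos(2 * np.pi * F_MOD * t))

print(I.max(), I.min())  # swings between ~1.0 and ~0.0
```

Note this writes the envelope as (1 + cos)/2 so the intensity stays non-negative, which matches the "100% to 0% and back" description.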