OK, I'm joining in on this adventure! I posted on reddit asking about the feasibility of AHD video systems. Looks like it isn't a totally far-fetched idea!
My thoughts on the AHD system:
Mr.RC-Cam, I commend your progress. I saw the DVR video of FPV using the AHD system, and this looks like a promising avenue for higher quality on small builds. There is still some work to be done, as you pointed out: we need to improve error handling a LOT and allow the signal to degrade gracefully. It looks like it is FM modulated, which has the digital-like characteristic of either working or not (compare the footage at 53:00 vs the rest of the video). Frequency modulation also wastes bandwidth, which isn't desirable, but it does help with noise immunity...
I'm wondering if the AHD standard can be modified even further to allow 720i60. I think 60i would be better than 30p: you'd get full 720-line resolution on still images and about half that in motion, but with much smoother motion, and deinterlacing filters have gotten pretty good! You can extend this concept of analog video compression all the way to Multiple sub-Nyquist Sampling Encoding (MUSE), a failed quasi-analog/digital standard that pushed a 1080i video stream down an 8 MHz channel. This should be doable with the current analog systems on the market, as you mentioned with the 960H systems!
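To put a number on the 60i vs. 30p trade-off above: interlacing trades vertical resolution per field for temporal resolution, but the raw pixel rate (and hence the bandwidth demand) stays the same. A quick sanity check, assuming a nominal 1280-pixel-wide active line:

```python
# Raw pixel-rate comparison: 720p30 vs. a hypothetical 720i60 mode.
# 30p sends 720 full lines 30 times a second; 60i sends 360-line
# fields 60 times a second. The raw data rate is identical; the
# trade-off is purely spatial vs. temporal resolution.

WIDTH = 1280  # assumed active pixels per line

p30_rate = WIDTH * 720 * 30   # progressive: full frames
i60_rate = WIDTH * 360 * 60   # interlaced: half-height fields

print(p30_rate)  # 27648000 pixels/s
print(i60_rate)  # 27648000 pixels/s -- same bandwidth, smoother motion
```

So 60i costs nothing extra in channel bandwidth over 30p; the price is paid in deinterlacing complexity on the receive side.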
There are quite a few standards out there: TVI, CVI, AHD, MUSE, and of course the ones that require multiple channels: S-Video, component, VGA, etc.
The biggest limitation of analog systems is the difficulty of meaningfully compressing the datastream, that is, removing redundancy. You can look into digital compression schemes to see how it's achieved, but compression adds latency by design, AND by removing redundancy, small errors have much more severe consequences and cause frame drops! It's a two-edged sword!
Of course, nowadays we can't ignore the presence of the DJI digital FPV system. But I think there are still some advantages to analog, as Joshua Bardwell pointed out in his review. I for one dislike it for being proprietary, although that is understandable given how much R&D had to go into developing it. Digital systems like this are inherently complex, stateful systems. Just look at VP9 encoding if you want a taste of just how complicated digital compression is! Complicating things further, it seems the DJI system uses a variable bit-rate, which requires RSSI estimation and a bi-directional link to allow the goggles to report to the transmitter how much data to push! (Or maybe not; maybe DJI has ways of recovering partial data better!)
The other outstanding issue with digital FPV is not so much that it's digital; it's that after compression, the stream is a lot less tolerant to data loss. Compression by design requires buffering a number of frames, since decoding a frame can require the previous frames (most frames are effectively deltas) and sometimes future frames. In MPEG, for example, losing a frame corrupts every frame that follows until the next keyframe, because they all build on that corrupt data.
RF output / transmission:
This is a very hard topic. There are chips out there that effectively abstract this away, so it isn't too much of a concern unless you want something better, but a high-level overview of the different technologies is worth noting:
AM (vestigial sideband AM, to be precise) is what's used for NTSC broadcast. Full-carrier AM is quite wasteful: most of the transmitted power goes into the carrier, which is just a DC bias mixed up to the RF center frequency. This was well known even in the very early days of radio, but it was used anyway because it's darn easy to demodulate; a single diode (envelope detector) is all that's needed! The fact that it's double sideband means that if your signal has 5 MHz of bandwidth, it takes 10 MHz once modulated. FM is even worse! Look at Carson's rule to understand why; in exchange, FM is more immune to noise. Again, a trade-off between bandwidth, noise immunity, and quality! It's like there's some theoretical limit haha.
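Carson's rule is simple enough to show directly: occupied FM bandwidth is roughly twice the sum of the peak frequency deviation and the highest baseband frequency. The deviation figure below is an assumed, illustrative value for a video FM link, not a measured one:

```python
# Carson's rule: occupied FM bandwidth ~= 2 * (peak deviation + highest
# baseband frequency). The deviation below is an ASSUMED value, just to
# show why FM takes more spectrum than the 2x of double-sideband AM.

def carson_bandwidth_hz(peak_deviation_hz, max_baseband_hz):
    return 2 * (peak_deviation_hz + max_baseband_hz)

baseband = 5e6        # ~5 MHz video baseband
deviation = 4e6       # assumed peak deviation for an analog video FM link

print(carson_bandwidth_hz(deviation, baseband) / 1e6)  # 18.0 MHz occupied
# versus 2 * 5 = 10 MHz for double-sideband AM of the same video
```

The extra spectrum isn't free noise immunity either; it only helps above the FM threshold, which is exactly the cliff-like behavior noted earlier.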
Then you have the more advanced methods, like QAM. QAM is still a purely analog means of transmission, and it allows you to transmit/receive two channels concurrently: an 'I' component and a 'Q' component, carried on carriers 90 degrees out of phase with each other, which means that changes to one component do not affect the other; they are 100% independent (in theory at least). This allows you to fully utilize the band, so a 5 MHz baseband takes up 5 MHz when mixed up to RF. But now you lack the elegant redundancy of double sideband and have more noise susceptibility...
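The I/Q orthogonality claim can be demonstrated numerically. A minimal sketch (idealized, no noise or filtering; all frequencies chosen so an integer number of carrier cycles fits the window): two independent values ride on cosine and sine carriers, and multiplying the combined signal by the matching carrier and averaging recovers each one with zero crosstalk.

```python
import math

# Quadrature multiplexing demo: two independent baseband values ride on
# cos and sin carriers. Multiplying by the matching carrier and averaging
# over whole cycles recovers each value; the cross terms average to zero.

FC, FS, N = 1000.0, 100000.0, 1000   # carrier Hz, sample rate Hz, samples (10 cycles)

def qam_tx(i_val, q_val, n):
    t = n / FS
    return i_val * math.cos(2*math.pi*FC*t) + q_val * math.sin(2*math.pi*FC*t)

def demod(samples, use_sin=False):
    total = 0.0
    for n, s in enumerate(samples):
        t = n / FS
        c = math.sin(2*math.pi*FC*t) if use_sin else math.cos(2*math.pi*FC*t)
        total += s * c
    return 2 * total / len(samples)   # factor 2 undoes the 1/2 from cos^2 averaging

rf = [qam_tx(0.7, -0.3, n) for n in range(N)]
print(round(demod(rf), 3))                # 0.7  -> the I component
print(round(demod(rf, use_sin=True), 3))  # -0.3 -> the Q component
```

In a real receiver the local carrier's phase must be recovered from the signal itself; any phase error here rotates I into Q, which is where the extra noise/phase sensitivity comes from.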
You can take all these methods and simply feed them digital data to turn them into their equivalent digital modulation counterparts. AM becomes OOK or ASK, FM becomes FSK (or variants like GFSK, MSK, etc.), and QAM is still called QAM, but with a number at the end telling you how many symbols are in the constellation diagram. More symbols means more digital data within the same bandwidth, but also higher noise susceptibility and the need for a cleaner channel. Hence QAM tends to be used mostly for high-bandwidth, low-loss coaxial data links, like cable internet.
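The symbols-vs-noise-margin trade-off is easy to quantify for square constellations: each M-QAM symbol carries log2(M) bits, but at a fixed peak amplitude the constellation points get closer together as M grows, so less noise is needed to push a received point into the wrong decision region.

```python
import math

# Each M-QAM symbol carries log2(M) bits, so a bigger constellation
# squeezes more data into the same bandwidth -- but at fixed peak
# amplitude the points get closer together, shrinking the noise margin.

def bits_per_symbol(m):
    return int(math.log2(m))

def min_distance(m, peak=1.0):
    """Spacing between adjacent points in a square M-QAM grid,
    normalized to +/-peak amplitude per axis."""
    side = int(math.sqrt(m))          # points per axis (4 for QAM-16)
    return 2 * peak / (side - 1)

for m in (4, 16, 64, 256):
    print(m, bits_per_symbol(m), round(min_distance(m), 3))
```

Going from QAM-4 to QAM-256 quadruples the bits per symbol while cutting the spacing between neighboring points by a factor of 15, which is why the dense constellations only survive on clean, wired channels.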
If you want to get REALLY complicated, just look at the voodoo that mobile phone carriers are doing with 3G, 4G, and 5G. One of the hallmark technologies is wide-band communication with time- and frequency-division multiplexing and orthogonal frequency-division multiplexing (OFDM), which massively improves on problems like multipath, important for cellular service. I suspect 5G, with its touted low latency, may enable FPV over 5G and IP! The latest development here is large antenna arrays where you control the amplitude and phase of the signal fed to each element to "beam" a signal in a particular direction. An FPGA or ASIC can construct a virtual signal for each device on the network and drive every antenna element at just the right amplitude and phase so that each user's traffic is beamed towards them for maximum reception. It's over my head for sure!
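The beam-steering part is actually less voodoo than it looks, at least for the textbook case of a uniform linear array: to point the beam at angle theta off boresight, each element's signal is delayed so the wavefronts add in phase in that direction, which for a narrowband signal is just a per-element phase shift. A sketch with an assumed geometry (8 elements, half-wavelength spacing):

```python
import math

# Phased-array beam steering for a uniform linear array: to aim at
# angle theta off boresight, give element n the phase
#   phase_n = -2*pi * n * (d / lambda) * sin(theta)
# so all elements' wavefronts arrive in phase in that direction.
# The geometry below (8 elements, lambda/2 spacing) is assumed.

def steering_phases_deg(n_elements, spacing_wavelengths, theta_deg):
    out = []
    for n in range(n_elements):
        phase = -2 * math.pi * n * spacing_wavelengths * math.sin(math.radians(theta_deg))
        out.append(round(math.degrees(phase) % 360, 1))
    return out

# 8-element array, lambda/2 spacing, beam steered 30 degrees off boresight:
print(steering_phases_deg(8, 0.5, 30.0))   # 90-degree steps between elements
```

Massive-MIMO systems go further by computing a separate set of weights per user and superimposing them, but each individual beam comes down to this phase progression.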
Solving the issues at hand with AHD:
I recently bought a Xilinx-based Arty Z7 FPGA/SoC board with HDMI output, and I used a similar dev board back in my university days. I might attempt to build a decoder for these signals and see. Low latency will hopefully be easy to achieve: just make the HDMI output each line of data synchronously with the data from the AHD signal. Hopefully I can get a software/HDL library that also allows compression and writing to a file on the SD card.
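To make the latency argument for the line-synchronous approach concrete, here is a behavioral model in Python (the real thing would be a small line buffer in HDL feeding the video timing generator, which is not sketched here). Measured in line-times, forwarding each line as it arrives beats a naive full-frame buffer by an entire frame:

```python
# Latency sketch: "output each line as it arrives" vs. a naive
# full-frame buffer, measured in line-times (720 lines per frame).

LINES = 720

def line_sync(src):
    """Forward each line the same line-time it arrived: ~1 line of latency."""
    for t, line in src:
        yield t, line

def frame_buffered(src):
    """Naive approach: buffer a whole frame before outputting anything."""
    buf = []
    for t, line in src:
        buf.append(line)
        if len(buf) == LINES:
            for l in buf:
                yield t, l        # everything comes out at frame-end time t
            buf = []

scanlines = [(t, f"L{t}") for t in range(LINES)]
print(next(line_sync(iter(scanlines)))[0])       # 0   -> first line out immediately
print(next(frame_buffered(iter(scanlines)))[0])  # 719 -> a full frame of latency
```

At 60 fields/s that difference is on the order of 16 ms, which is exactly the kind of latency FPV can't afford.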
Another far-fetched goal of mine is to integrate the recording camera and the FPV camera. Having two separate cameras just seems redundant. Why not have one camera that outputs a video stream usable for FPV? Many camcorders offer composite or even micro-HDMI outputs, but these are useless for FPV: they are provided only for focus, framing/composition, and replaying video on the big screen, all cases where latency is of little concern. Hopefully I can reverse-engineer one of those little MIPI CSI camera sensors, use my FPGA to record 4K video from it, and, since it's an FPGA, whip up some HDL that implements an AHD video output!
Things that will take time for me to figure out are how video compression works, reverse-engineering a camera sensor, and learning more about RF.