Umm, decoding demodulated NTSC, even in an FPGA, may be more of a hassle than I expected. One challenge is going to be all the DSP filtering needed to separate the luma signal from the chroma subcarrier. Decoding the color information will require implementing a phase-locked loop within the FPGA to lock onto the colorburst, plus some means of demodulating the chroma. GNU Radio and the SDR community might give me some insight into how this all works. My first prototype will probably be black and white, and may even have dot crawl all over everything lol!

It looks like an uncompressed digital stream from something like HD-SDI might actually be easier and "better," as the signal encoding is more naive and easy to decode, although finding cheap, hackable security cameras that use digital signals over coax is proving more challenging than expected. Then again, with uncompressed 720p30 you effectively need more than double the bandwidth; that's just how it goes. Intra-frame compression will help some.

I've considered ATSC, too. ATSC occupies the same bandwidth as NTSC, and even uses a similar AM-based modulation (8VSB)! This means you should be able to continue using current 5.8GHz transmitters, a huge plus! But ATSC does rely on video compression algorithms, which makes it a stateful system where bit errors take some time to resolve. (If you lose too much data within a frame to fully reconstruct it, that bad data must be used to calculate future frames and adjacent pixels, leading to the classic digital corruption look, with blocks of bad pixels being shifted around until a full I-frame arrives to fix the accumulated errors.) So it remains to be seen how flyable a system like this would be when the whole image can stay corrupted for up to a couple of seconds.
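To make the luma/chroma separation problem concrete, here's a toy numpy sketch. It's a minimal frequency-domain band split around the 3.58 MHz subcarrier with brick-wall FFT masks; a real decoder would use a notch or comb filter (and the test frequencies are chosen to land on exact FFT bins, which real video won't do):

```python
import numpy as np

fsc = 3.579545e6          # NTSC chroma subcarrier
fs = 4 * fsc              # sample at 4x colorburst (a common choice)
n = 4096
t = np.arange(n) / fs

# Toy composite line: slow luma detail plus chroma riding on the
# subcarrier (frequencies chosen to land on exact FFT bins)
f_luma = 140 * fs / n     # ~0.49 MHz luma component
luma = 0.5 + 0.3 * np.sin(2 * np.pi * f_luma * t)
chroma = 0.2 * np.sin(2 * np.pi * fsc * t)
composite = luma + chroma

# Brute-force separation: zero out a band around the subcarrier to get
# luma, keep only that band to get chroma. Same band-split idea as a
# notch/comb filter, just done the lazy way in the frequency domain.
spec = np.fft.rfft(composite)
freqs = np.fft.rfftfreq(n, 1 / fs)
chroma_band = np.abs(freqs - fsc) < 0.6e6
luma_est = np.fft.irfft(np.where(chroma_band, 0, spec), n)
chroma_est = np.fft.irfft(np.where(chroma_band, spec, 0), n)
```

The catch this sketch hides is exactly the dot-crawl problem: fine luma detail near 3.58 MHz lands inside the chroma band and gets misinterpreted, which is why real decoders graduate from notch filters to comb filters.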
I suspect the best approach is to use OFDM modulation, as it was engineered specifically to be resilient against multipath and Doppler shift, combined with a video compression algorithm that relies mostly on intra-frame compression (so one corrupt frame can't screw up the rest) and tolerates errors elegantly (such that the result is simply increased noise or reduced resolution, rather than missing blocks). OFDM is outside my scope of knowledge; it would practically take a PhD in RF electronics to do it right! Unless there is an IC that handles OFDM communication over the 5.8GHz band. If anyone knows of a digital video transmission scheme that meets these requirements, can be transmitted over a single channel, and occupies the same bandwidth as NTSC, I'm all ears! So far the only things that come to mind are ATSC and EX-SDI.
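The multipath resilience of OFDM is actually easy to demonstrate in a few lines of numpy. This sketch sends QPSK on 64 subcarriers through a toy two-tap multipath channel (direct path plus an echo); because the cyclic prefix absorbs the echo, the channel becomes a single complex gain per subcarrier, undone by a trivial one-tap equalizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64        # subcarriers per OFDM symbol
cp = 16           # cyclic prefix length (must exceed channel delay spread)

# Random QPSK data, one complex symbol per subcarrier
bits = rng.integers(0, 2, size=2 * n_sub)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# OFDM modulation: IFFT turns the subcarriers into one time-domain
# symbol, then the last `cp` samples are copied to the front as a
# cyclic prefix so the echo wraps around harmlessly.
tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-cp:], tx])

# Toy 2-tap multipath channel: direct path + delayed echo
h = np.array([1.0, 0.4j])
rx = np.convolve(tx_cp, h)[: len(tx_cp)]

# Demodulation: drop the prefix, FFT back to subcarriers, then one
# complex division per subcarrier undoes the whole channel.
rx_sym = np.fft.fft(rx[cp:])
H = np.fft.fft(h, n_sub)
eq = rx_sym / H
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (eq.real < 0).astype(int)
rx_bits[1::2] = (eq.imag < 0).astype(int)
```

All the bits come back despite the echo. Of course the hard parts of a real system (synchronization, channel estimation from pilots, doing this at 5.8GHz rates) are exactly the PhD-level bits, but the core idea really is this small.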
OK, I'm joining in on this adventure! I posted on reddit asking about AHD video systems, wondering about their feasibility. Looks like it isn't a totally far-fetched idea!

My thoughts on the AHD system: Mr.RC-Cam, I commend your progress. I saw the DVR video of the FPV flight using the AHD system; this could be a promising avenue for higher quality on small builds. There is still some work to be done, as you pointed out: we need to improve error handling a LOT, and allow the signal to degrade gracefully. It looks like it is FM modulated, which has the digital-like characteristic of either working or not (compare the footage at 53:00 vs the rest of the video). Frequency modulation also wastes bandwidth, which isn't desirable, but it does help with noise immunity... I'm wondering if the AHD standard can be modified even further to allow 720i60. I think 60i would be better than 30p, as it would give full 720-line resolution on still images and about half that when moving, but with much smoother motion. Deinterlacing filters have gotten pretty good!

You can actually extend this concept of analog video compression to Multiple sub-Nyquist Sampling Encoding, or MUSE, which was a failed quasi analog/digital standard that pushed a 1080i video stream down an 8 MHz channel. This should be feasible with the current analog systems on the market, as you mentioned with the 960H systems! There are quite a few standards out there: TVI, CVI, AHD, MUSE, and then of course the ones that require multiple channels: S-Video, component, VGA, etc.

The biggest limitation of analog systems is the difficulty of compressing the datastream meaningfully, that is, removing redundancy. You can look into digital compression schemes to see how it's achieved, but compression adds latency by design, AND by reducing redundancy, small errors have much more severe consequences and cause frame drops! It's a two-edged sword!

Digital FPV: Of course, nowadays we can't ignore the presence of the DJI digital FPV system.
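(A side note on the 60i-vs-30p point above: the two basic deinterlacing strategies are tiny to model. This is a toy numpy sketch of "weave" and "bob" on a made-up 8-line frame; real filters blend between the two adaptively per pixel based on motion detection.)

```python
import numpy as np

# Toy frame: an 8-line x 4-px gradient "image" split into two fields,
# as an interlaced camera would transmit it
frame = np.arange(32, dtype=float).reshape(8, 4)
odd_field = frame[0::2]    # lines 0,2,4,6 (one 60i field)
even_field = frame[1::2]   # lines 1,3,5,7 (the next field)

# "Weave": interleave two successive fields back into a full frame.
# Perfect vertical resolution on static images, but combing on motion.
weave = np.empty_like(frame)
weave[0::2] = odd_field
weave[1::2] = even_field

# "Bob": upscale a single field by interpolating the missing lines.
# Half the vertical detail, but no combing and full 60 fps motion.
bob = np.repeat(odd_field, 2, axis=0)
bob[1:-1:2] = (odd_field[:-1] + odd_field[1:]) / 2
```

The 60i argument above is essentially that a motion-adaptive deinterlacer gets you the best of both: weave where the image is still, bob where it moves.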
But I think there are still some advantages to analog, as Joshua Bardwell pointed out in his review. I for one take some dislike to it for being proprietary, although that's understandable given how much R&D had to go into developing it. Digital systems like this are going to be inherently complex, stateful systems. Just look at VP9 encoding if you want a taste of just how complicated digital compression is! Complicating things further, it seems the DJI system has a variable bit rate, which requires RSSI calculation and a bi-directional link so the goggles can report back to the transmitter how much data to push! (Or maybe not; maybe DJI has better ways of recovering partial data!)

The other outstanding issue with digital FPV is not so much that it's digital; it's that after compression, it's a lot less tolerant of data loss. And compression by design requires buffering a number of frames, as decoding any frame requires the previous frames (they are effectively deltas) and sometimes future frames. In the case of MPEG, if you lose a frame, it corrupts every frame that follows, because they build on that corrupt data.

RF output / transmission: This is a very hard topic. There are chips out there that effectively abstract this away, so it isn't too much of a concern unless you want something better, but a high-level overview of the different technologies is worth noting. AM (more precisely, vestigial sideband AM, which keeps the carrier) is what's used for NTSC. Full-carrier DSB-AM is quite wasteful: literally HALF the transmitted power goes into the carrier, a DC bias mixed up to the RF center frequency. This was well known even in the very early days of radio, but it was used anyway because it's darn easy to demodulate; a single diode (envelope detector) is all that's needed! Being double sideband also means that if your signal has 5MHz of bandwidth, it takes 10MHz of bandwidth once modulated. FM is even worse! Look at Carson's rule to understand the workings of FM.
It is more immune to noise, though. Again, a trade-off between bandwidth, noise immunity, and quality! It's like there's some theoretical limit haha.

Then you have the more advanced methods, like QAM. QAM is still a purely analog means of data transmission, and it allows you to transmit/receive 2 channels concurrently: an 'I' component and a 'Q' component, carried on carriers 90 degrees out of phase with each other, which means changes to one component do not affect the other; they are 100% independent (in theory, at least). This allows you to fully utilize the band, so 5MHz of baseband takes up 5MHz when mixed up to RF. But now you lack the redundancy of double sideband and have more noise susceptibility...

You can take all these methods and simply feed them digital data to turn them into their digital modulation counterparts. AM becomes OOK or ASK, FM becomes FSK (or variants like GFSK, MSK, etc.), and QAM is still called QAM, but with a number at the end telling you how many symbols are in the constellation diagram. More symbols means more digital data within the same bandwidth, but also higher noise susceptibility and the need for a cleaner channel. Hence QAM tends to be used mostly for high-bandwidth, low-loss coaxial data links, like cable internet.

If you want to get REALLY complicated, just look at the voodoo that mobile phone carriers are doing with 3G, 4G, and 5G. One of the hallmark technologies is ultra-wideband communication with time- and frequency-division multiplexing and orthogonal frequency-division multiplexing (OFDM), which massively improves on problems like multipath, important for cellular service. I suspect 5G, with its touted low latency, may enable FPV over 5G and IP! The latest development here is large antenna arrays where you control the phase of the signal to each element to "beam" the signal in a particular direction.
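The I/Q independence claim is worth seeing numerically. This toy numpy sketch puts two unrelated baseband tones on the same carrier frequency, 90 degrees apart, and recovers each one cleanly by coherent demodulation (frequencies are picked to sit on exact FFT bins so a crude brick-wall low-pass suffices):

```python
import numpy as np

fs = 50e6          # sample rate (made-up values, just for illustration)
fc = 5e6           # shared carrier frequency
n = 2000
t = np.arange(n) / fs

# Two independent baseband messages
i_msg = np.sin(2 * np.pi * 0.2e6 * t)
q_msg = np.cos(2 * np.pi * 0.3e6 * t)

# Quadrature modulation: I rides on cosine, Q on the 90-degree-shifted
# carrier. Both occupy the exact same RF band.
rf = i_msg * np.cos(2 * np.pi * fc * t) - q_msg * np.sin(2 * np.pi * fc * t)

# Crude brick-wall low-pass: zero everything above a cutoff bin
def lowpass(x, cutoff_bin):
    spec = np.fft.rfft(x)
    spec[cutoff_bin:] = 0
    return np.fft.irfft(spec, len(x))

# Coherent demodulation: mix with each carrier and low-pass away the
# 2*fc image. cos*cos averages to 1/2 while cos*sin averages to 0 --
# that orthogonality is why I and Q don't interfere.
i_rec = lowpass(2 * rf * np.cos(2 * np.pi * fc * t), 100)
q_rec = lowpass(-2 * rf * np.sin(2 * np.pi * fc * t), 100)
```

Each recovered channel matches its message with no crosstalk from the other, despite both sharing one band. The catch, as noted above, is that recovery depends on the receiver's carrier phase being exactly right, which is where the noise susceptibility creeps in.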
You can construct a number of virtual signals in an FPGA or ASIC, unique for each device on the network, and optimize how much of each signal goes to each antenna so that every user has their specific traffic beamed towards them, by carefully broadcasting those RF components on each antenna at just the right amplitude and phase for maximum reception. It's over my head for sure!

Solving the issues at hand with AHD: I recently bought a Xilinx Arty Z7 FPGA/SoC with HDMI outputs, and I used a similar dev board back in my university days. I might attempt to build a decoder for these signals and see. Low latency will hopefully be easy to achieve: just make the VGA output each line of data synchronously with the data from the AHD signal. Hopefully I can get a software/HDL library that also allows compression and writing to a file on the SD card slot.

Another far-fetched goal of mine is to integrate the recording camera and the FPV camera. Having 2 separate cameras just seems redundant. Why not have one camera that outputs a video stream that can be used as your FPV feed? Many camcorders offer composite or even microHDMI outputs, although these are useless for FPV, as they are provided only for focus, framing/composition, and replaying video on the big screen, all cases where latency is of little concern. Hopefully I can reverse-engineer one of those little MIPI/CSI camera sensors and use my FPGA to record 4K video from them, and, because it's an FPGA, whip up some HDL that implements an AHD video output!

Things that will take time for me to figure out: how video compression works, reverse-engineering a camera sensor, and learning more about RF stuff.
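The line-synchronous output idea hinges on the decoder finding each line's sync pulse first. Here's a toy software model of sync separation on a composite-style waveform (all levels and timings are made up for illustration; on the Arty Z7 this would be an ADC feeding a comparator threshold plus a sample counter in HDL):

```python
import numpy as np

# Fake composite-style waveform: each scanline is a sync tip (below
# blanking level) followed by active video. Levels are IRE-ish units.
samples_per_line = 100
sync_len = 8
n_lines = 5
rng = np.random.default_rng(1)
line = np.concatenate([
    np.full(sync_len, -40.0),                          # sync tip
    rng.uniform(0, 100, samples_per_line - sync_len),  # active video
])
signal = np.tile(line, n_lines) + rng.normal(0, 2.0, n_lines * samples_per_line)

# Sync separation: threshold halfway into the sync tip and report the
# falling edges, i.e. the start of each line. An FPGA does the same
# with a comparator and an edge detector.
below = signal < -20.0
line_starts = np.flatnonzero(below[1:] & ~below[:-1]) + 1
```

Once the line starts are known, the output side just replays each decoded line as soon as it completes, which is what keeps the glass-to-glass latency down to roughly one scanline plus filter delay.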