
androidi

Members+
  • Content Count: 4
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About androidi
  • Rank: RC-Cam Visitor
  1. Ok, so I pondered this thing a bit. After digging up some practical information, I found the relevant links. Looking at PAL, which has a wider bandwidth than NTSC, Wikipedia tells about as much as is needed for determining the data speed. Both PAL and NTSC are irrelevant after this, except for considering the modulation capabilities of el cheapo vTXs: https://en.wikipedia.org/wiki/PAL

So we are looking at an 8 MHz channel at most, which is achievable quite easily. The first hit points to a TI chipset (from 1994!): http://www.ti.com/lit/an/slwa022/slwa022.pdf

Instead of just sending FM, QAM should be a no-brainer: https://www.radio-electronics.com/info/rf-technology-design/quadrature-amplitude-modulation-qam/what-is-qam-tutorial.php https://en.wikipedia.org/wiki/QAM_(television)

64QAM would give a beefy 30 Mbps of digital bandwidth, which is more than plenty (even for several HD cameras :D). I think I can ditch all that ADV-HDTV-chip sheizze and just focus on pure signals, as HardRock pointed out. A CVBS wrapper would have been a nice twist, but it is totally unneeded and only limits the creativity.

So the only thing left is to have H.265 output data from a HiSilicon chip, pass it through a vTX-vRX pair, and decode it with another chip on the other end. Easy peasy. The problems start to emerge when signal quality starts to suffer. By downgrading from 64QAM to QPSK/BPSK the transmission speed drops, but it is still more than enough (like 5 Mbps or something).

One other thing to include would be some kind of error-checking / fixing mechanism. RAID striping with parity comes to mind, but it's designed for hard drives, and a CRC does not help either, since it only detects errors and throws the invalid data in the trash. Ideally the fixing mechanism could reconstruct some useful data from a low-quality signal. This is exactly what forward error correction (FEC) codes do, e.g. Reed-Solomon or convolutional codes. There must be hundreds of these around; it's just a matter of digging one up and implementing the needed bit-fiddling (see the sketch below).

It would be superb to be able to change the HEVC encoding bitrate and/or resolution on the fly, e.g. by utilising the RSSI from the "controller" RX (that 2.4 GHz thingy) via the FC board, or something (I'll have to dig through the BetaFlight source some more to get a better understanding of how things work there). Changing the resolution on the fly could be more of a problem, but I'd imagine changing the bitrate would just be a request to the encoder process (depending on the implementation, of course).

But remember! A digital link for PAL video only takes some hundreds of kbps, which already is quite "ground-breaking" (for FPV pilots, that is. The rest of the world already did this like 40 years ago. :DDD)
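To make that FEC point concrete, here is a minimal sketch in plain C (my own illustration, not code from any real FPV stack) of the smallest classic error-correcting code, Hamming(7,4). Unlike a CRC, which can only detect corruption and trash the packet, this corrects any single flipped bit per 7-bit codeword; a real link would use something stronger (Reed-Solomon, convolutional codes, LDPC), but the principle is the same:

    /*
     * Hamming(7,4): a toy forward error corrector. Each 4-bit nibble is
     * expanded to a 7-bit codeword with three parity bits; any single
     * flipped bit can then be located and repaired on the receiving end.
     * (Illustration only; the layout follows the textbook code.)
     */
    #include <stdio.h>
    #include <stdint.h>

    /* Codeword positions 1..7; parity bits sit at positions 1, 2, 4. */
    static uint8_t hamming74_encode(uint8_t nibble)
    {
        uint8_t d0 = (nibble >> 0) & 1;          /* data -> position 3 */
        uint8_t d1 = (nibble >> 1) & 1;          /* data -> position 5 */
        uint8_t d2 = (nibble >> 2) & 1;          /* data -> position 6 */
        uint8_t d3 = (nibble >> 3) & 1;          /* data -> position 7 */
        uint8_t p1 = d0 ^ d1 ^ d3;               /* covers 1,3,5,7 */
        uint8_t p2 = d0 ^ d2 ^ d3;               /* covers 2,3,6,7 */
        uint8_t p4 = d1 ^ d2 ^ d3;               /* covers 4,5,6,7 */
        return (uint8_t)(p1 | p2 << 1 | d0 << 2 | p4 << 3 |
                         d1 << 4 | d2 << 5 | d3 << 6);
    }

    static uint8_t hamming74_decode(uint8_t cw)
    {
        uint8_t b[8];
        for (int i = 1; i <= 7; i++)
            b[i] = (cw >> (i - 1)) & 1;
        /* The recomputed parities form a syndrome that is exactly the
           1-based position of a single flipped bit (0 = no error seen). */
        uint8_t syndrome = (uint8_t)((b[1] ^ b[3] ^ b[5] ^ b[7])
                                   | (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1
                                   | (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2);
        if (syndrome)
            cw ^= (uint8_t)(1u << (syndrome - 1));   /* repair the bit */
        return (uint8_t)(((cw >> 2) & 1) | ((cw >> 4) & 1) << 1 |
                         ((cw >> 5) & 1) << 2 | ((cw >> 6) & 1) << 3);
    }

    int main(void)
    {
        uint8_t cw = hamming74_encode(0xB);      /* send nibble 1011 */
        cw ^= 1u << 4;                           /* channel flips one bit */
        printf("recovered: 0x%X\n", hamming74_decode(cw));  /* -> 0xB */
        return 0;
    }

The price is overhead: 7 channel bits carry only 4 data bits, so the ~30 Mbps of 64QAM shrinks to roughly 17 Mbps of payload. Which is still plenty.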
  2. From Finland. I guess it won't matter, but I'm not sure if it's even legal to build radios of any kind here without a license, so those devices really are dummies. That's beyond great news! No more Arduino tinkering, yesss... I actually have one custom SAMD21/RTOS project going right now, but in a couple of days I will return to this project and start poking around the CCTV board with the HiSilicon SDK. It's great to have more people interested in doing this "the other way around".
  3. I got myself a CCTV camera from AliExpress; it was like 30 USD: https://www.aliexpress.com/item/H-265-H-264-3-0MP-2048-1536-IP-StarLight-Camera-Module-Board-3516C-IMX291-FishEye/32822600792.html That camera does not utilise the full possible frame rate of the IMX291, but it's "easy" to start with. I've got the HiSilicon SDK for the Hi3516C and the likes, and another SDK set for the Hi3798M, which is the chip on the DVR board (also about 30 USD): https://www.aliexpress.com/item/8CH-CCTV-H-265-NVR-Board-4K-5MP-4MP-HI3798M-Security-NVR-Module-4CH-5MP-8CH/32810674862.html

HiSilicon has even more powerful chips, but I cannot even find the Hi3516C at Mouser and the likes, so it's kind of hard to create hardware prototypes. But perhaps those can be found somewhere: AliExpress, Taobao, the dark web? Another thing is the MIPI CSI-2 standard (the bus used in most cameras). I found some draft on the web, but it would cost like a gazillion dollars to be a member of the MIPI organisation and have full access to the CSI-2 specs.

Then there's the H.265 codec. A free(?) alternative is VP9, but since HiSilicon already has H.265 built in, why not use it, at least for the tests. I was thinking of utilising CVBS as a "carrier", to re-use as much existing hardware as possible. I don't have a HAM license (at least, not yet), so I cannot just blast "anything" from and to the skies. There would also be an advantage in using existing Foxeer/RunCam/Caddx/whatever FPV cameras with this tech, since packing the data saves bandwidth: even over a link of semi-poor quality, a perfectly clear picture could be transmitted back to the goggles / ground station (see the back-of-the-envelope numbers below).

An analog signal is inherently more vibrant than a digital one, but it can also be contaminated more easily by noise, and there are no error-checking or RAID-like recovery options for it. A digital signal, on the other hand, can be compressed, saving that precious bandwidth.

From those FPV-video-over-WiFi projects I've studied (e.g. on GitHub), I started thinking: how the heck does WiFi actually work? I found some university lectures and a lot of other material by googling, with references and explanations of what BPSK, QPSK, QAM4, QAM16, QAM64 and the likes actually are. The foundational thing is that you've got an antenna; what you do with it is up to you. The WiFi specs are just the ones people are used to (99.999% of users never even knowing it). Still, nothing keeps you from utilising the existing hardware differently.

I think (I haven't tested) that those cheap vTX transmitters don't really know what the data passing through them is. I might be wrong, in which case it wouldn't be an option to choose whether or not to wrap the data in a CVBS stream. But if those devices are dumb enough, they can be utilised far more effectively than an ordinary 100 mW WiFi link, in which case AM with some convenient modulation is also completely possible.
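As a back-of-the-envelope check on that "packing the data saves bandwidth" claim, here is a tiny C snippet comparing raw 720p60 video with an H.265 stream; the raw format (8-bit 4:2:2) and the ~2 Mbps H.265 rate are assumptions on my part, not measured figures:

    /*
     * Why compression matters: raw vs. H.265 bitrate for a 720p60 feed.
     * The 8-bit 4:2:2 raw format and the 2 Mbps H.265 figure are assumed
     * values for illustration, not measurements.
     */
    #include <stdio.h>

    int main(void)
    {
        const double w = 1280.0, h = 720.0, fps = 60.0;
        const double bits_per_pixel = 16.0;          /* 8-bit 4:2:2        */
        const double hevc_mbps = 2.0;                /* assumed H.265 rate */
        double raw_mbps = w * h * fps * bits_per_pixel / 1e6;
        printf("raw 720p60: %.0f Mbps, H.265: %.0f Mbps (~%.0fx smaller)\n",
               raw_mbps, hevc_mbps, raw_mbps / hevc_mbps);
        return 0;
    }

Roughly a 440x reduction, which is what makes an HD picture over a narrow, noisy link thinkable in the first place.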
  4. I just have to comment on this thread. I, too, am currently investigating the possibility of having a digital link / HD image on FPV systems. What most people overlook is the channel bandwidth. On a 5.8 GHz FPV vTX link, we're looking at about 20 MHz. What was the channel width of a WiFi system again? Yes, the same 20 MHz (some use 40 or 80 MHz channels these days). So the default 20 MHz channel width is more than plenty, since I know for a fact that there are millions of Netflix users out there who have 20 MHz WiFi at home. And it works. So... a digital HD image fits well in that 20 MHz band (even several streams at once).

Now, once we're over the bandwidth thingy, let's look at the signals. An FPV vTX link is FM (frequency modulated), whereas WiFi links use schemes like QPSK and 64QAM, which also modulate the amplitude. If, however, we look at the colorburst section of an analog video signal, it's actually kind of amplitude modulated. So all the vTX transmitters, even cheap ones, are capable of sending at least "kind of" AM. The downside of the colorburst in a CVBS signal is that it's very sensitive: most often you lose the color info before the whole picture goes blank.

However, what this all sounded like to me as a developer is that one could send something other than raw CVBS data through the vTX link. So I went and bought a cheap Sony Starvis (IMX291) CCTV camera and a DVR board. The camera hardware is capable of creating an H.265 stream (and the DVR of decoding it). The chip on the cam is a Hi3516C V300, and these very common CCTV chips (along with other HiSilicon chips of the same family) have been hacked and utilised before. Does anyone else recall the Mirai botnet?

Now here comes the tricky part. The H.265 signal must be "wrapped" in an "analog CVBS shell" (along with checksums, etc.) and sent via an ordinary, cheap vTX transmitter (because, why not; after all, expensive things are always expensive). On the receiving end, this "non-picture picture" must be decoded by some wizardry. I've got ADV7181C chips for this job; they extract the digital pixel data (ITU-R BT.656) from the CVBS signal, and the "real" bitstream can be obtained by interpreting the decoded pixel values. After we have this real data (the reconstructed H.265 stream), it must be fed back to the system somehow. For this, I have the DVR: it's capable of receiving and decoding an H.265 stream (Hi3798M chip), and from the DVR I get out both CVBS and HDMI.

There are some advantages to an analog signal over purely digital WiFi. One obvious thing to note is that a CVBS signal is "running" all the time. A PAL frame has 625 horizontal lines (about 576 of them visible), which can be utilised individually. Traditionally an Ethernet packet is something like 1500 bytes, but there are a gazillion other protocols too (e.g. MODBUS, CAN, ...) which have totally different packet lengths (even variable lengths). So why not pick the convenient timing and packet length of one CVBS line? The visible CVBS lines give about 14400 (576 x 25) timed packets a second. One data packet can be e.g. 704 bytes, if each pixel on the line represents one byte, which means there must be 256 separable shades of gray. That might be unrealistic, but it would offer a whopping 81 Mbps (about 10 MB/s) of digital bandwidth, just by using the luma values of the CVBS data! The recommendation for 1280x720 H.265 video is something between 1 and 2 Mbps, so in an ideal case there would be plenty of digital bandwidth.

In addition to the b/w luma channel, there's the chroma, or colorburst. The colorburst is a bit more challenging to utilise, because it's a lot more sensitive and its resolution is only about half that of the luma (i.e. 352x288, or something like that). It would, however, give some additional bandwidth, e.g. for checksums, retransmissions, and things like that.

I have only begun testing this setup, and I'm still waiting for some parts to arrive, but I'm already able to generate CVBS luma data from an Arduino (using the TVout library), send it via a vTX link, and translate the pixel data back to a real bytestream on the receiving end (a sketch of that byte-to-luma mapping follows below). The next step is producing a color signal with an Arduino Due and getting more data through. The third step is to hack the Hi3516C completely, to get the H.265 data out and process it into a CVBS signal (with e.g. an FPGA or a Cortex-M7 or something like that).

With a digital video stream there's always the problem of frames. Every practical video codec I've come across so far wants to study a full frame at a time, whereas an analog signal is "just transmitted". In the analog world it's just: read the capacitor values from the CCD/CMOS, pipe the voltages to a vTX transmitter, receive those voltages on the other end (vRX), convert the values to digital, and pipe them via LVDS to your display. Using a fast CMOS sensor (e.g. the Sony Starvis IMX291), even speeds of 120 fps are achievable (about 8 ms per frame), but that might not fit in the channel (with checksums 'n' stuff). 60 fps might be doable, which would give a frame latency of 17 ms plus encode, wrap, transmit, unwrap, decode and LVDS output. A faster CMOS and a faster encoder might shave off some milliseconds, like the newer Qualcomm Snapdragons with modern mobile-phone-oriented CMOS cameras (120-240 fps is common nowadays). This will, however, increase the cost; ideally the whole setup would stay under 100 EUR/USD.
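As a sketch of that line-as-packet idea (my own illustration, not the code from the actual test setup): since 256 separable shades of gray is optimistic over a noisy vTX link, the example below assumes a more conservative 16 levels, i.e. 4 data bits per pixel, and shows how the receiving end survives small luma errors by quantizing each sample back to the nearest level:

    /*
     * "One CVBS line = one data packet": map data bits to luma levels on
     * the visible part of a line, quantize them back on the receiving end.
     * All level and pixel counts here are assumptions for illustration,
     * not measured figures from real vTX hardware.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PIXELS_PER_LINE 704
    #define LEVELS          16            /* 4 data bits per pixel */
    #define STEP            (256 / LEVELS)

    /* Encode: two pixels per byte, high nibble first, levels centered
       in their quantization bins for maximum noise margin. */
    static void line_encode(const uint8_t *data, size_t n, uint8_t *luma)
    {
        for (size_t i = 0; i < n && 2 * i + 1 < PIXELS_PER_LINE; i++) {
            luma[2 * i]     = (uint8_t)((data[i] >> 4)  * STEP + STEP / 2);
            luma[2 * i + 1] = (uint8_t)((data[i] & 0xF) * STEP + STEP / 2);
        }
    }

    /* Decode: quantize each (possibly noisy) luma sample back to a nibble. */
    static void line_decode(const uint8_t *luma, uint8_t *data, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            uint8_t hi = (uint8_t)(luma[2 * i]     / STEP);
            uint8_t lo = (uint8_t)(luma[2 * i + 1] / STEP);
            data[i] = (uint8_t)(hi << 4 | lo);
        }
    }

    int main(void)
    {
        uint8_t payload[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
        uint8_t luma[PIXELS_PER_LINE] = { 0 }, out[4];

        line_encode(payload, sizeof payload, luma);
        luma[0] += 5;  luma[3] -= 6;      /* simulate a little channel noise */
        line_decode(luma, out, sizeof out);

        printf("recovered: %02X %02X %02X %02X\n",
               out[0], out[1], out[2], out[3]);   /* -> DE AD BE EF */
        return 0;
    }

Even at 4 bits per pixel the visible lines would still carry 704 x 4 x 14400, i.e. about 40 Mbps, so there is plenty of room to trade raw capacity for noise margin, checksums and FEC.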