• 0 Posts
  • 79 Comments
Joined 3 months ago
Cake day: June 23rd, 2024

  • What method do you recommend to [capture the video interlaced, preferably as losslessly as possible]?

    It’s been a while since I’ve done this but unless you’re recovering the Ark of the Covenant, it should be enough to follow these simple steps:

    • Use H.264 in OBS with a high bitrate on a fast PC, preferably on a USB 3.0+ port (even if the capture card is 2.0) to avoid competing with other devices on the bandwidth-limited 2.0 bus.
    • Check that the output is indeed interlaced.
    • Look at stats/logs to see if any frames are dropped, and investigate whether it’s just the 59.94 Hz compensation, actual blank sections of tape, or some part of the processing chain that can’t keep up.
    • Adjust audio levels; you might get better results using your PC’s mic socket rather than the capture card’s audio ADC (most tapes are mono anyway), but make sure to disable auto-gain or else quiet sections will get boosted like crazy, increasing the noise.
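
    If you want to sanity-check the capture from a script instead of eyeballing OBS’s stats panel, a minimal sketch like this can read the stream’s field order flag (assuming ffprobe is installed and on PATH; “capture.mkv” is just a placeholder name):

    ```python
    # Minimal sketch: verify a capture really is interlaced, using ffprobe
    # (assumed to be on PATH). "capture.mkv" is just a placeholder name.
    import subprocess

    def field_order(path: str) -> str:
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=field_order",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()  # e.g. "tt", "bb", or "progressive"

    order = field_order("capture.mkv")
    print("interlaced" if order in ("tt", "bb", "tb", "bt") else order)
    ```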



  • the composite to HDMI converter has a single switch from 720p to 1080p

    Composite is 480i/60*. That is, 60 times per second a blanking interval occurs, then 240 lines of picture are drawn - either the top (odd) or bottom (even) field. This is necessary for CRT TVs because a 30 Hz refresh rate would cause seizures but drawing all 480 lines 60 times per second would be wasteful. Look it up online for details; if you want videos, I recommend the Television playlist by Technology Connections on YouTube, especially the first video.

    *Technically, the vertical frequency for NTSC is 59.94 Hz (precisely 60000/1001) to avoid interference between the color subcarrier and the audio carrier while keeping compatibility with B/W sets. In practice, you should check that the video output is actually at this frequency; if it’s 60, then every 1000th frame will be duplicated - usually no big deal, unless this also swaps the odd & even fields. No such problem exists for PAL, which was always exactly 50 Hz.
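
    To see where the “every 1000th frame” figure comes from, here’s the arithmetic as a quick sketch:

    ```python
    # Where the "every 1000th frame" figure comes from: a 59.94 Hz source
    # captured at exactly 60 Hz needs one duplicated frame per 1000.
    from fractions import Fraction

    source = Fraction(60000, 1001)   # true NTSC rate, ~59.94 Hz
    capture = Fraction(60)           # what the capture device assumes

    ratio = capture / source                  # 1001/1000: one extra frame per 1000
    seconds_per_dup = 1 / (capture - source)  # 1001/60 s between duplicates
    print(ratio, float(seconds_per_dup))      # 1001/1000, ~16.68 s
    ```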

    If the converter only outputs 720p or 1080p (presumably at 60 Hz), all 720 or 1080 lines are drawn 60 times per second, which means the converter first deinterlaces in some fashion and then adds lines with some scaling technique.

    Deinterlacing is basically a task similar to scaling but with key differences:

    • You only need to extend one dimension and exactly 2×
    • You know what the missing lines looked like 1/60 of a second ago (and 1/60 of a second later), which can be reused verbatim or interpolated from.

    There are various deinterlacing techniques that could be used here (a few are sketched in code after this list):

    • doubling each of the lines for 240p60 video: easiest and best for temporal accuracy but loses half the vertical resolution; good for NES/SNES capture because it’s already 240p60 (using timing tricks to make the CRT draw lines of both fields in identical spots) but most capture devices misinterpret that as 480i60, arbitrarily deciding which of the 240-line frames will be the top field
    • holding each field over for two frames to get 480p60 video where only every other line changes per frame: good for viewing (this is how playback usually works on progressive-scan monitors) but wasteful for storage, as each field is basically stored twice and compression does not help. Pausing will show two consecutive fields (whether the top or bottom one comes first is random, depending on the exact frame you pause on), so whatever moved in between will have jagged edges, but that’s expected.
    • most common, and likely what your hardware converter does: doing the above but only storing every fully changed frame, for 480p30 video: this is terrible for video shot on video cameras because the time difference between fields is not accounted for, and you will see jagged edges on moving objects, or even moiré patterns when scaling. However, this is the best option for PAL film scans because they were shot with a 24 fps global shutter (and played 4% faster at 25 fps for video purposes) and then scanned interlaced, so the top and bottom fields correspond to the same moment in time (as long as the fields are not swapped, which is why capture software usually has that option). For NTSC film scans, 3:2 pulldown is used (each odd frame gets scanned for 3 fields, each even frame for 2 fields, converting 23.976 frames/s to 59.94 fields/s), so this technique needs to be modified to skip the right one out of every 5 fields (or average it with its duplicate for noise reduction).
    • computation-heavy: guessing what the missing lines of each field would be using some kind of algorithm, for 480p60 video: this is a kind of upscaling technique and as such can produce artifacts, but they will be better than what you’d get with the methods above. This will probably yield the best results if you NEED the output to be progressive video with limited bitrate, such as for YouTube.
    • the best: NO deinterlacing! Store the video in an interlaced format because that’s what it is, even though software support is not as good (players have always handled it fine, and video editors have gotten better, but capture devices/software often treat interlaced video as unsupported and avoid it). You can encode it to MPEG-2 (a terrible option for storage though, bad quality/bitrate ratio), burn it to a DVD (or use a DVD player’s USB port if present) and connect the DVD player’s composite output (or better, component/SCART at 480i) to a CRT TV for the intended way of viewing interlaced video. Nobody knew about LCDs when this video standard was introduced! (OLEDs can technically be driven interlaced but no controller does that in sync with NTSC as far as I know.) The best DVD players for this are from circa 2010, when USB ports were commonplace but HDMI wasn’t: an HDMI port could mean an internal scaler or framebuffer sits in the signal path (I know a device that recompresses its video output before it even reaches the HDMI/composite output module: it’s horrible!). Speaking of codecs, some don’t support interlacing anymore (H.265 for example), so be careful.
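
    For reference, here is roughly how a few of the options above map onto ffmpeg filter chains, driven from Python. This is a sketch, not what any converter actually runs; the file names are placeholders and it assumes an NTSC 480i source:

    ```python
    # A few of the options above as ffmpeg filter chains, driven from Python.
    # Sketch only: file names are placeholders, and it assumes NTSC 480i input.
    import subprocess

    def ff(filters: str, outfile: str) -> None:
        subprocess.run(["ffmpeg", "-i", "tape_480i.mkv", "-vf", filters, outfile],
                       check=True)

    # Line doubling for content that is really 240p60 (NES/SNES style):
    # split each frame into its two fields, then double the lines back up.
    ff("separatefields,scale=iw:2*ih:flags=neighbor", "240p60_doubled.mkv")

    # Motion-adaptive interpolation to 480p60 (the computation-heavy option):
    # yadif in send_field mode outputs one full frame per field.
    ff("yadif=mode=send_field", "480p60.mkv")

    # Inverse telecine for NTSC film scans: match fields back into the original
    # film frames and drop the pulldown duplicates (3:2 pulldown removal).
    ff("fieldmatch,decimate", "film_23976p.mkv")
    ```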

    Don’t use the converter if it cannot output 480i or at the very least 480p! Scaling should happen during playback; the files should stay at the original resolution. You can also try non-trivial upscaling with some AI tools, but still DEFINITELY keep the original-resolution file for archival.

    use a [separate] worse quality VCR for cleaning

    I don’t have experience with moldy tapes. It might be a good idea but it adds wear; I’d just clean the VCR after every tape if I suspect mold. You’d still need to clean the cleaning VCR after every tape to avoid cross-contamination, so it would be the same amount of cleaning anyway, just with extra wear on a second machine.

    Is [advanced deinterlacing] possible in OBS?

    Idk, I just keep my files interlaced, stored as high-bitrate H.264 (I don’t have enough computing power to encode at sufficiently good quality in better codecs). If I wanted deinterlacing, I could process the files with ffmpeg filters or some other tools.
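
    For what it’s worth, the interlaced H.264 encode looks roughly like this, driven from Python (a sketch: file names are placeholders, and tff=1 assumes a top-field-first tape; check the actual field order first):

    ```python
    # Sketch: re-encode a lossless capture to high-bitrate H.264 while keeping
    # it interlaced. File names are placeholders; tff=1 assumes a
    # top-field-first source (check with ffprobe's field_order first).
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "capture_lossless.mkv",
        "-c:v", "libx264", "-crf", "16",
        "-flags", "+ilme+ildct",   # enable libx264's interlaced coding tools
        "-x264opts", "tff=1",      # flag the stream as top field first
        "-c:a", "copy",
        "interlaced_h264.mkv",
    ], check=True)
    ```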


  • I don’t expect newer VCRs to be made, there’s a lot of precise mechanical engineering and the R&D that would need to go into making a professional-grade VCR today does not make financial sense. However, there is an option to refurbish existing ones and capture the magnetic signal as directly as possible. On media such as VHS or LaserDisc, the signal is not quite composite video, as that would require some 6 MHz of bandwidth. Instead, the color subcarrier is remodulated to a way lower frequency and then back to normal for playback. The folks behind ld-decode (a project that takes raw signal from a LaserDisc’s laser pickup and translates it into composite video) and its fork vhs-decode have made software that captures everything the head picks up into a raw file, and then does TBC and chroma decoding to create the best possible video. They also documented what hardware can be used for the capture (usually a firmware-modded Conexant video capture card or a beefy FPGA) and how to connect it to some VCRs’ circuitry.

    Of course, this is quite an over-the-top effort for home tapes; I’d just go with a generic composite capture card that doesn’t deinterlace or upscale, and not bother with TBC.


  • Why a separate VCR for cleaning tapes? It’s enough to clean the heads AFAIK.

    Also, you should definitely not use default deinterlacing techniques for the video, especially not ones built into these generic dongles. Capture it interlaced, preferably as losslessly as possible, then use deinterlacing software where you can fine-tune the settings if you need to.

    No, TBC most likely cannot be done in software, unless the video features a prominent vertical bar (such as a black border). It depends on the quality you want to reach; look closely and decide if the jitter is acceptable.

    Edit: TBC can obviously be done in software if you have the raw composite or head signal but that is not possible with the capture cards you have.



  • Well, this is how a large part of carbon credits “work”. You can pay someone to not cut down their tree, for example, and get a certificate in return. However, the demand for wood will likely just get satisfied elsewhere, and there is little stopping people from selling multiple carbon credits per tree, or from including trees that would not get cut down anyway.

    In the best-case scenario, actual carbon gets stored so that it won’t decompose (like dead trees or other biomass exposed to oxygen, maybe even plastic someday) or get burned (like coal that future people can reach); however, that’s energy-intensive (hydrocarbons release energy when turned into oxides, so reversing it takes energy) and difficult. Obviously, such carbon credits are expensive and would probably cost an airline as much as the fuel for your flight. Sealing an oil reservoir instead of using it, as I suggest, would be the easiest way to effectively accomplish this, but oil producers don’t want to miss out on the fields they operate.
    Unless the “carbon neutral” option for your flight ticket is a large percentage of its price, they are probably using dubious carbon credits - in the typical case, they are like saying your crypto mining rig is zero-emission because it’s next to an existing hydroelectric dam. The energy from that dam could have been used to offset some carbon-intensive production elsewhere (unless all your energy demand is already satisfied by clean electricity and you cannot export it, like Iceland or some islands).
    At worst, it’s a pure scam that offsets no carbon and is pushed by Big Oil to prevent buyers from considering systemic changes to their carbon-heavy operation.

    Edit: Iceland indeed has an overproduction of clean energy and uses it to smelt and export aluminum, which is energy-intensive. Still, as long as there are gas furnaces and combustion engines on the island, there is room for improvement. However, small tropical islands (which cannot host aluminum smelters) mostly use solar panels and some storage solution, and computationally heavy tasks are a legitimate use for any excess electricity production.



  • Input devices almost never use USB 3.0. In fact, most manufacturers save money and don’t shield the cable, forcing low-speed USB (1.5 Mb/s), which is enough for all mice and keyboards - less than 50 kb/s is needed even for 240 Hz polling. High-end mice might have USB 3.0 (9 pins instead of 4 in the plug) but there should be no practical difference between 3.0 and 2.0 speeds. The polling rate will most likely be identical, and the microsecond difference in how long each takes to transfer the data is likely far smaller than the lag from the mouse’s wireless connection.
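
    The arithmetic is easy to sanity-check; here’s a toy calculation (the 8-byte report size is a typical mouse HID report, an assumption rather than a measured value):

    ```python
    # Toy calculation: bandwidth a mouse needs at various polling rates,
    # assuming a typical 8-byte HID report (an assumption, not a measurement).
    REPORT_BYTES = 8
    LOW_SPEED_BPS = 1_500_000  # USB low speed, 1.5 Mb/s

    for hz in (125, 240, 1000):
        bps = hz * REPORT_BYTES * 8
        print(f"{hz:>5} Hz -> {bps / 1000:6.1f} kb/s "
              f"({100 * bps / LOW_SPEED_BPS:.2f}% of low-speed USB)")
    ```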

    Just use any USB 2.0 hub; even $2 ones from AliExpress will work the same as high-end ones. Most are sold with 4 ports because that’s what their standard generic chip provides. You probably have one lying around or built into your monitor. You’re unlikely to cause interference, so just choose any spot with a strong signal to the desk area, not necessarily line-of-sight: if the mouse works everywhere within 2 meters of the intended area, then the intended area will have good signal and minimal chance of dropout. Neither the lag nor the polling rate degrades with weaker signal, unless you count the extra nanoseconds the radio waves need to travel.

    The only difference is when you need another port for high-speed applications such as mass storage devices or MTP with your phone, at which point just plug them directly into the PC for max speed.



  • The pattern-seeking brain would be driven crazy trying to predict when the next tick is going to happen, as this pattern is not easy to analyze without tools. Experienced musicians could figure out that the shortest time between beats is half the second-shortest, and perhaps work out the rest from there.

    Anyway, you could make a website that simulates this or generate a long YouTube video, send a link to unsuspecting people and see what they think. If you want to be extra sneaky, use rain sound as background and “close-up” recordings of single drops for the beats. If you can’t code, make sound files of all the different possible measures in Audacity and use a media player with seamless playback and naïve shuffle.