  • How well does clipping the antenna actually work?

    If my FM radio antenna rusts and falls off, my FM radio still works. Reception will be shitty but it’s absolutely still usable for nearby or powerful stations.

    When the GPS antenna inside my much-abused phone came loose, GPS got very unreliable but still often worked in a glitchy way.

    If I clipped the external antenna on a car’s cell modem, would it not be the same way? Based on my experience with those other kinds of antennas, I’d expect the manufacturer to lose the ability to track me while driving in remote or mountainous areas, but in cities or on highways it would generally still connect. Is it not so?
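
    Back-of-the-envelope, that intuition pencils out. In a free-space path loss model, range scales as 10^(dB/20), so losing roughly 10 dB of antenna performance (my guess for a clipped antenna, not a measured figure) cuts maximum range to about a third. Every number in this sketch is an illustrative assumption:

        import math

        # Assumed, illustrative numbers -- none of these are measured:
        FREQ_MHZ = 700          # low-band LTE, the kind used for rural coverage
        LINK_BUDGET_DB = 140    # rough maximum path loss an LTE link can tolerate
        CLIP_LOSS_DB = 10       # guessed penalty for a clipped external antenna

        def max_range_km(budget_db, freq_mhz=FREQ_MHZ):
            # Invert free-space path loss, FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44,
            # to get the distance at which the budget is used up. Free space is
            # wildly optimistic for real terrain; only the ratio matters here.
            return 10 ** ((budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

        intact = max_range_km(LINK_BUDGET_DB)
        clipped = max_range_km(LINK_BUDGET_DB - CLIP_LOSS_DB)
        print(f"range ratio clipped/intact: {clipped / intact:.2f}")  # ~0.32

    Close to a tower (city, highway) that margin doesn’t matter and the modem still connects; at the edge of coverage (remote, mountainous) the clipped antenna drops off first.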




  • Reddit went absolutely overboard every year with dumb April Fools posts, to the point where they crowded out every real topic and made the site unusable for the day, even back in the early days when it was otherwise very usable.

    I personally never saw that level of bullshit overload anywhere else.

    One of the things I like about Lemmy is that it avoids (somewhat) the everybody-piles-on-the-same-joke plagues that swept through Reddit. I’m hoping we as a community can keep avoiding them (somewhat).

    Sorry to all you beans and Stör enjoyers. This place is big enough for the both of us, but sometimes it does get uncomfortable.



  • Man, that video at the end with the view of the axle from underneath while driving: two separate fan enclosures with small sheet-metal straps and fiddly little electronics-case screws, all facing down toward the road and right behind the wheels?

    This is not a place where you put fans, or big openings into the kind of electronics you need fans to cool, if you want them to keep functioning while driving through salty slush and mud.

    Seems perfect for Southern California.







  • A token is the base unit of text an LLM works with, and it’s always been that way. The LLM does not work with characters directly: characters are grouped into chunks, often a whole word or a piece of one, and this stream of tokens is what the LLM actually processes. This is also why LLMs have such trouble with spelling questions like “how many Rs in raspberry?”: they never see the individual letters in the first place, so they can’t count them.
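
    You can see this directly with a tokenizer. A minimal sketch using OpenAI’s tiktoken library (my choice for illustration; other models ship their own tokenizers, and exactly how “raspberry” splits depends on the vocabulary):

        # pip install tiktoken
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")  # one example vocabulary

        token_ids = enc.encode("raspberry")
        print(token_ids)  # a short list of integer IDs, not letters

        # Decode each ID back to its text chunk to see how the word was split:
        for t in token_ids:
            print(t, repr(enc.decode([t])))
        # The model only ever sees the IDs, so it can't count the r's directly.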

    No, the LLMs do not all tokenize the same way. Different tokenizers are (or at least once were) one of the major ways they differed from each other. A simple tokenizer might split words into one token per syllable, but real ones use learned subword schemes like byte-pair encoding (BPE), and I think they’ve gotten much more complicated than that now.
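
    To give a flavor of BPE: it starts from single characters and repeatedly merges the most frequent adjacent pair into a new token. A toy single-merge sketch (real tokenizers learn many thousands of merges from huge corpora):

        from collections import Counter

        def most_common_pair(tokens):
            # Count adjacent pairs and return the most frequent one.
            pairs = Counter(zip(tokens, tokens[1:]))
            return max(pairs, key=pairs.get)

        def merge(tokens, pair):
            # Replace every occurrence of `pair` with a single merged token.
            out, i = [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                    out.append(tokens[i] + tokens[i + 1])
                    i += 2
                else:
                    out.append(tokens[i])
                    i += 1
            return out

        tokens = list("banana")  # start from single characters
        tokens = merge(tokens, most_common_pair(tokens))
        print(tokens)            # ['b', 'an', 'an', 'a']

    A trained tokenizer just applies its learned merge list in order, so the same word always splits the same way for a given model.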

    My understanding is very basic and out-of-date.