• 6 Posts
  • 52 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • People think that govt-developed = bad. It’s a consideration for sure, but if anything govt-developed is so hopelessly and inherently compromised, then many of the measures discussed here are already useless for privacy, because almost all of them run over the internet, a govt-created system. Even Tor. Yet here we are anyway, because they are still useful systems.

    Governments pour tons of time, money, and effort into secure communication, and not for profit, and we can still take advantage of those advances with some caution.

  • The heads-up display isn’t something you see in front of you like in most planes. The helmet itself is the heads-up display, like augmented reality.

    There are cameras all over the plane to help you see through the aircraft (see ground targets through the floor, nearby aircraft through your wing). Think of the resolution and bitrate needed to make it useful!

    Just like an Apache gunner can simply look at a target to aim the gun at it, you can do the same thing to get a missile lock. And if you can’t hit it, it’s still marked for every allied plane in the airspace to see. If you are out of missiles but are tracking an enemy plane miles ahead, you can send the track data to an F-15 miles behind you and let its missiles lock on and fire from farther out than it could engage alone (a rough toy sketch of this track-sharing idea is at the end of this comment).

    On top of that, the radar is awesome, letting it see threats from greater distances than the opposition can, while its stealth is good enough to keep them from easily doing the same.

    I’m sure there are other surprises too, but the military obviously wants to keep those a secret.
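
    To make the “send the track to a wingman” idea concrete, here is a rough toy sketch in Python. Everything in it (TargetTrack, can_engage, the coordinates and ranges) is a made-up illustration of the general concept, not any real datalink or message format.

```python
# Toy model of cooperative targeting: one aircraft builds a target track from
# its own sensors and shares it so another aircraft can check whether it can
# engage. All names and numbers here are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class TargetTrack:
    track_id: str
    position: tuple[float, float, float]  # x, y, z in meters (toy coordinate frame)
    velocity: tuple[float, float, float]  # m/s
    classification: str                   # e.g. "hostile fighter"


def can_engage(track: TargetTrack,
               shooter_position: tuple[float, float, float],
               missile_range_m: float) -> bool:
    """Return True if the shared track lies within the shooter's missile range."""
    dx = track.position[0] - shooter_position[0]
    dy = track.position[1] - shooter_position[1]
    dz = track.position[2] - shooter_position[2]
    distance = (dx**2 + dy**2 + dz**2) ** 0.5
    return distance <= missile_range_m


# The "spotter" (out of missiles) shares its track; the "shooter" far behind it
# checks whether the target already sits inside its own engagement envelope.
track = TargetTrack("T-042", (80_000.0, 5_000.0, 9_000.0), (-250.0, 0.0, 0.0), "hostile fighter")
print(can_engage(track, shooter_position=(180_000.0, 4_000.0, 10_000.0), missile_range_m=160_000.0))
```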



  • “tRuSt tHe sCieNCe!”

    This is a joke of course… well, kinda. When science is done well, it can change the world. Who would be against that?

    I don’t like the phrase because, while the process of science seeks to be as factual and unbiased as possible, those in the scientific community are still human. They are fallible, corruptible, and can do things for their own personal gain or profit. So to me it could easily be misunderstood as “trust science blindly.”

    But “Trust the science that is validated by multiple reputable sources” just doesn’t roll off the tongue as nicely.



  • I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda/advertising bots. There’s a joke that reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.

    I’ll keep reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to reddit for advice/recommendations that you can’t really get elsewhere.

    Using an LLM, I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.

    And if it’s factually incorrect, so what? It was just some kind stranger™ on the internet.