hmm doesn’t seem to work for me.
Only thing that works is to manually re-select the keyboard among my keyboard selection/input method.
This has been driving me crazy.
I’m using a Pixel with GrapheneOS and the FUTO Keyboard.
Slight workaround for everyone on Android: Click the keyboard selector at the bottom right of your screen and just select the already active keyboard. This will temporarily fix it until you start to type up a different comment.
When you can get ticketed for speeding while your car is on the back of a tow truck:
https://www.the-sun.com/motors/11008328/photo-towing-van-speeding-ticket-evidence/
Or a red light traffic ticket when your car was stolen:
https://abc7chicago.com/chicago-red-light-ticket-camera-illinois-car-stolen-theft/11677595/
And the police/courts won’t help you, because the problem belongs to the private company running the cameras… I think we can see where some sort of AI-backed camera network is headed.
A band-aid fix might be to set up an easy way for someone to dispute the charge. For every day the company takes to review the dispute, it would owe the accused the amount of the fine, with a minimum payout of twice the fine.
Even then, I’d rather cameras not be used in this way at all.
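The refund scheme proposed above can be sketched in a few lines. This is just an illustration of the arithmetic; the fine amount and review times below are made-up numbers, and `refund_owed` is a name I picked, not anything from an actual policy:

```python
# Sketch of the dispute-refund idea: the camera company owes the accused
# one fine amount per day of review delay, with a floor of twice the fine.

def refund_owed(fine: float, review_days: int) -> float:
    """Amount the company must pay back to the accused."""
    return max(2 * fine, review_days * fine)

print(refund_owed(100, 1))   # floor applies: 200
print(refund_owed(100, 7))   # a week of delay: 700
```

The floor matters: without it, a same-day review would cost the company nothing, so there would be no incentive to avoid bogus tickets in the first place.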
Just reported by Mohamed Aruham on Twitter
The oldest tweets I could find that actually started reporting this are from ~16 days ago.
https://x.com/Piotrdotcom/status/1829126494574067992
They reference a page here that was posted on Aug 29th.
Depends on what you’re using.
With local models you use something called a “negative prompt” to exclude anything that you don’t want in the image.
If you really want this to work, you would have to train/fine-tune a model by feeding it a bunch of images of that person’s handwriting.
If you’re just asking ChatGPT to do this for you, then you’re doing it wrong.
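For anyone unfamiliar with the “negative prompt” mentioned above, here is a minimal sketch of how it is usually passed to a local Stable Diffusion pipeline. I’m assuming the Hugging Face `diffusers` API; the model id and prompt strings are placeholders of my own, not from the comment:

```python
# Sketch: a negative prompt lists concepts the sampler should steer
# *away from*, passed alongside the ordinary (positive) prompt.

def generation_args(prompt, negative_prompt="blurry, low quality, watermark"):
    """Bundle arguments the way StableDiffusionPipeline.__call__ accepts them."""
    return {"prompt": prompt, "negative_prompt": negative_prompt}

# With diffusers installed, the call would look roughly like:
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   image = pipe(**generation_args("a handwritten thank-you note")).images[0]

args = generation_args("a handwritten thank-you note")
print(args["negative_prompt"])
```

Hosted chatbots generally don’t expose this knob, which is why the local-model route gives you more control.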
Yeah until the cops pull you over and take your cash under civil asset forfeiture because it’s “suspicious that you have so much cash on hand”.
The features you would miss out on are mobile check deposit and app notifications (usually there are a few that you want enabled but that are only available through the app).
Good luck when banking apps start doing this.
Well, I don’t think anyone is really sure just how time travel can mess with stuff. I would probably take a page from some time travel movie I saw: avoid any sort of temporal paradox, and avoid too many changes.
So, I would probably remove myself from the equation as much as possible. Go to a hotel or somewhere where I can avoid accidentally running into anyone I might know. Leave all electronics behind, but take a book or something. Spend the whole day avoiding TV/news/people, just reading or working on perfecting a skill. At the end of the day I would call up a broker from the hotel room and find out which stock had the greatest percentage gain that day. Just enough information for one good trade.
Then I would go back to that morning, buy up a ton of that stock, live out life normally, then sell the stock at the end of the day.
Rinse and repeat for a short time.
I would absolutely avoid something like winning the lottery, but for those of you who would use time travel to win the lottery, you might want to follow the advice from this comment here: https://old.reddit.com/r/AskReddit/comments/24vo34/whats_the_happiest_5word_sentence_you_could_hear/chb38xf/
I just want to be able to set alarms with their calendar app (where it currently only sends notifications).
Ok, but the most important part of that research paper is published on the GitHub repository, which explains how to provide audio data and text data to recreate any STT model in the same way that they have done.
See the “Approach” section of the GitHub repository: https://github.com/openai/whisper?tab=readme-ov-file#approach
And the Training Data section of their model card: https://github.com/openai/whisper/blob/main/model-card.md#training-data
With this you don’t really need the paper hosted on arXiv; you have enough information on how to train/modify the model.
There are guides on how to fine-tune the model yourself: https://huggingface.co/blog/fine-tune-whisper
Which, from what I understand of the link to the OSAID, is exactly what they are asking for. The ability to retrain/fine-tune a model fits this definition very well:
The preferred form of making modifications to a machine-learning system is:
- Data information […]
- Code […]
- Weights […]
All 3 of those have been provided.
I don’t understand. What’s missing from the code, model, and weights provided to make this “open source” by the definition of your first link? It seems to meet all of those requirements.
As for the OSAID, the exact training dataset is not required; per your quote, they just need to provide enough information that someone else could train the model using a “similar dataset”.
I did a quick check on the license for Whisper:
Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.
So that definitely meets the Open Source Definition on your first link.
And it looks like it also meets the definition of open source as per your second link.
Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
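For anyone skimming those appendices: WER (word error rate) is just word-level edit distance divided by the length of the reference transcript. Here is a small self-contained sketch of the standard calculation; the example sentences are made up, not taken from the paper:

```python
# WER = (substitutions + insertions + deletions) / reference word count,
# computed with a classic dynamic-programming edit distance over words.
# (CER is the same idea applied to characters instead of words.)

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion / 6 words
```

In practice people use a library like `jiwer` for this, but the metric itself is that simple.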
The STT (speech to text) model that they created is open source (Whisper) as well as a few others:
They also gave him a fake line to deliver and didn’t reveal that Darth Vader was actually Luke’s father during the filming of that scene: https://www.soundandvision.com/news/100104hamill/
It’s such a great moment! The fake line that was put in there just to try and keep the secret was “You don’t know the truth: Obi-Wan killed your father!” But as much as I enjoyed leaking false information, it was a wonderfully hard secret to keep because (Irvin) Kershner, the director, brought me aside and said “Now I know this, and George knows this, and now you’re going to know this, but if you tell anybody, and that means Carrie or Harrison, or anybody, we’re going to know who it is because we know who knows.”. -Mark Hamill
I initially think this same thing every time I see someone mention MTG on here, glad I’m not the only one.
I don’t think this is specifically an “AI” problem as much as it’s a privacy issue with the way companies are buying and selling our info for targeted advertising. These models are definitely enabling them to do more with the data that they have as well as to collect more information from us in new ways.
Yeah, the other thing I could see happening is a tactic similar to the one used by scammers, where they use mules who pick up mail from various Airbnbs throughout whatever country. But this would definitely limit most bot operations… unless some organization specializes in this and just offers a service to create a bunch of accounts for anyone willing to pay.
Also, how many accounts would you limit to a single address, and how long would you lock up an address before it could be used again (given that people do move around from time to time)?
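Those two questions can be framed as a tiny rate-limiting policy. This is a toy sketch with numbers I made up (a cap of 3 accounts per address and a 180-day cooldown), purely to show the shape of the mechanism:

```python
# Toy per-address signup limiter: cap accounts per street address, and let
# old signups "expire" after a cooldown so movers can reuse the address.

import datetime

MAX_ACCOUNTS_PER_ADDRESS = 3          # made-up cap
COOLDOWN = datetime.timedelta(days=180)  # made-up lockout window

registry = {}  # address -> list of signup timestamps

def register(address, now):
    signups = registry.setdefault(address, [])
    # Drop signups older than the cooldown so the address frees up again.
    signups[:] = [t for t in signups if now - t < COOLDOWN]
    if len(signups) >= MAX_ACCOUNTS_PER_ADDRESS:
        return False  # address is saturated and still inside the cooldown
    signups.append(now)
    return True
```

The tradeoff is exactly the one raised above: a long cooldown hurts bot farms more, but also hurts whoever legitimately moves into the address next.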
Edit: typo.
Probably “Trap Adventure 2”.
Imagine an old Mario game where Bowser has the most ridiculous traps set up. You need to memorize all of the trap locations as well as have the coordination to tip-toe around them to survive.
https://www.youtube.com/watch?v=_nW9k6k1I3k