Ah, that must be it, sorry. I thought they had decorrelated phone numbers and IDs.
Groups have an encryption key that I guess you receive from other members upon joining.
Spaces is an underused feature that I hope to see gain more traction! It makes Matrix a credible competitor to Slack and Discord.
Not really; I have used it for years like that. But you need to set it up initially on your phone. The newish feature (less than a year old) is that, I think, they no longer require a phone number to set up a new account.
That’s really interesting! It shows which communities share users. I am part of jlai.lu, a French-speaking community that is relatively isolated, but also slrpnk.net, which seems very spread out!
Would it make sense to compute the standard deviation of each instance’s communities? It would give an idea of which are islands and which are more extended. Not sure whether it makes more sense to compute it on the 2 projected dimensions or on the original 21934, though.
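A minimal sketch of the 2-D version of this idea, assuming you have each instance’s communities as (x, y) positions in the projected map (the instance names and coordinates below are made up for illustration):

```python
import math

def spread(points):
    """Root-mean-square distance of a set of 2-D points from their centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

# Hypothetical coordinates: instance name -> 2-D positions of its communities
instances = {
    "jlai.lu": [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)],  # tight cluster
    "slrpnk.net": [(-3.0, 1.0), (2.5, -2.0), (0.0, 4.0)],   # spread out
}

for name, pts in instances.items():
    print(f"{name}: spread = {spread(pts):.3f}")
```

A small spread would flag an island; a large one, an instance whose communities are scattered across the map. The same formula works on the original 21934 dimensions, just with longer coordinate tuples.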
A year ago I saw a comment I often think about: that India’s economy, where a lot of call-center and remote workers are, is “token-based”. LLMs are going to hurt their labor, but they are also the best placed to profit off LLMs, having many established customers already.
Install text-generation-webui, check their “whisper stt” option, and you can talk with a computer. As a non-native speaker I prefer to read the English output rather than listen to it, but they do provide TTS as well.
It is called fine-tuning. I haven’t tried it, but oobabooga’s text-generation-webui has a tab to do it and I believe it is pretty straightforward.
Fine-tune a base model on your dataset; you will then need to format your prompt the way your AIM logs are organized, e.g. you will need to add “<ch00f>” at the end of your text completion task. It will complete it in the way it learned.
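A rough sketch of what that formatting might look like. The log structure, speaker markers, and function names here are assumptions for illustration, not a real AIM export format:

```python
# Turn chat logs into fine-tuning text with speaker markers, so that at
# inference time ending the prompt with "<ch00f>" asks the model to
# complete in that speaker's voice.
def to_training_text(messages):
    """messages: list of (speaker, text) tuples from a chat log."""
    return "\n".join(f"<{speaker}> {text}" for speaker, text in messages)

log = [("friend42", "hey what's up"), ("ch00f", "not much, debugging")]
sample = to_training_text(log)

# At inference time, end the prompt with the marker you want completed:
prompt = sample + "\n<ch00f>"
print(prompt)
```

Because the base model was fine-tuned on text in exactly this shape, the trailing marker steers the completion toward that speaker’s style.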
If you don’t have the GPU for it, many companies offer fine-tuning as a service, like Mistral.
It is Llama-3-8B, so it is not out of the question, but I am not sure how much memory you would need to really go to a 1M context window. They use ring attention to achieve the high context window, which I am unfamiliar with, but it seems to greatly lower the memory requirements.
To actually read how they did it, here is their model page: https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k
Approach:
- meta-llama/Meta-Llama-3-8B-Instruct as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data
For training data, we generate long contexts by augmenting SlimPajama. We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
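For the NTK-aware interpolation step above, the commonly published recipe is to stretch the RoPE base by the context-extension factor raised to dim/(dim−2), so low frequencies get interpolated while high frequencies stay nearly unchanged. A sketch of that initialization (Gradient then optimizes theta empirically, so their exact schedule may differ):

```python
# NTK-aware RoPE base scaling: a common way to initialize theta for a
# longer context window before further empirical tuning.
def ntk_rope_base(base, scale, head_dim):
    """Return the stretched rotary base for a context extended by `scale`."""
    return base * scale ** (head_dim / (head_dim - 2))

# Llama-3 uses rope theta 500000 and head_dim 128; going from an 8k to a
# ~1048k window is roughly a 128x extension.
new_base = ntk_rope_base(500000.0, 1048576 / 8192, 128)
print(f"new RoPE base: {new_base:.3e}")
```

The exponent dim/(dim−2) is slightly above 1, so the base grows a bit faster than the raw extension factor; that is what keeps the highest-frequency rotary components close to their original wavelengths.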
“Theft” is actually legal. Sharing (what they call “piracy”) is not. How about getting the fucking copyright reform that we should have done two decades ago?
OpenAI should be fine. They are leaders but there are plenty of competitors.
Microsoft is in a much more dominant situation and will have to argue that Google competes with them, which is true but may be hard to sell, given that I don’t think Google offers its TPU services to any other company.
Nvidia is in a situation of monopoly. For them it will be hard to argue otherwise. AMD is simply not there; no one is using it.
And this is why research is going in another direction: smaller models, which allow easier experiments.
I am pretty sure there are ASICs going into production as we speak with Whisper embedded. Expect a 4-dollar chip to add voice recognition and a basic LLM to any appliance.
Also, as a side effect, we just solved speech recognition. In a year or two, speaking to machines will be the default interface.
There is a company-wide demotivation plague at Google. Don’t blame middle managers; it extends to the top.
I use it almost daily.
It does produce good code. It does not reliably produce good code. I am a programmer; it makes my job 10x faster, and I usually just have to fix a few bugs in the code it generates. Over time, I learned what it is good at (UI code, converting things, boilerplate) and what it struggles with (anything involving newer tech, algorithmic understanding, etc.)
I often refer to it as my intern: it acts like an academically trained, not particularly competent, but very motivated, fast-typing intern.
But then, I also work in the field. Prompting it correctly is too often dismissed as a skill (I used to dismiss it too). It requires more understanding than people give it credit for.
I think that, like much IT tech, it will gradually go from being a dev tool to an everyday tool.
All the pieces of the puzzle needed to control a computer by voice using only natural language are there. You don’t realize how big that is. Companies haven’t assembled it yet because it is actually harder to monetize than to build. I think Apple is probably in the best position for it. Microsoft is going to attempt it and will fail as usual, and Google will probably make a half-assed attempt at it. I’ll personally go for the open source version of it.
Damn, I want to read it, but it is between the only two accounts I have muted (for different reasons).
EDIT: God, the sewer when you unblock Musk’s account! I am never doing it again. Why do people talk over this stupidly noisy channel instead of having threaded discussions like civilized great apes?
Good to know, thanks!