This reminds me of this Cyanide and Happiness short: https://youtube.com/watch?v=W09ltuxt3rI
Yoko, Shinobu, and… um… 🤔
Am Yisrael Chai (the people of Israel live). Slava Ukraini (glory to Ukraine) 🇺🇦 ❤️ 🇮🇱
There’s also group C, which I was part of: you just say you just pooped, or scratch your butt, whenever they ask you to load/unload, and they’ll immediately offer to do it for you instead.
It still works using just a web browser. Some might just prefer a native app, which Google is currently rolling out.
I love it, I use it on all of my devices at home and it works flawlessly.
Me see cat, me upvote 🐈
I don’t have a Xiaomi tablet but you could try what has been suggested in this thread: https://old.reddit.com/r/miui/comments/18tz52u/how_to_remove_this_3_dots_new_in_hyperos_mi_pad_6/
This would be a meme by itself:
lacks some cheese IMO
Let them fight among themselves and prove time and time again that patents are idiotic and hinder innovation.
I think it’s already removed? I checked by sorting by New and there’s nothing right now, unless you mean another community?
Yup, they already forced Google to announce that they’ll add such a choice screen for the search engine and web browser on Android: https://www.neowin.net/news/google-will-add-new-search-and-browser-choice-screens-for-android-phones-in-europe/
It’s only a matter of time before Microsoft does so too.
ollama should be much easier to set up!
ROCm is decent right now: I can do deep learning work and CUDA-style GPU programming with it on an AMD APU. However, ollama doesn’t yet work out of the box with APUs, though users report that it works with dedicated AMD GPUs.
As for Mixtral 8x7B, I couldn’t run it at first on a system with 32GB of RAM and an RTX 2070S (8GB of VRAM); I’ll probably try with another system soon. [EDIT: I actually got the default version (mixtral:instruct) running with 32GB of RAM and 8GB of VRAM (RTX 2070S).] That same system also runs CodeLlama-34B fine.
So far I’m happy with Mistral 7B: it’s extremely fast on my RTX 2070S, and it’s not really slow when running in CPU mode on an AMD Ryzen 7. Its speed is okay-ish (~1 token/sec) when I try it in CPU mode on an old ThinkPad T480 with an 8th-gen i5 CPU.
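If you want to measure tokens/sec on your own hardware, a minimal sketch (assuming ollama is already installed; `--verbose` prints timing stats, including an eval rate, after each reply):

```shell
# Download the Mistral 7B weights (a one-time step, several GB)
ollama pull mistral

# Chat interactively; with --verbose, ollama prints stats such as the
# "eval rate" in tokens/s after each response, so you can compare
# GPU vs. CPU-mode speeds on different machines.
ollama run mistral --verbose
```

The same commands work for other models by swapping the model name (e.g. `mixtral:instruct` or `codellama:34b`).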
[cites mondoweiss]
Their MBFC rating: https://mediabiasfactcheck.com/mondoweiss/
Even Wikipedia considers that Nazi Hamas outlet biased and opinionated: https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources
Be better.
PSA: give open-source LLMs a try, folks. If you’re on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. Obviously it’s faster if you have a CUDA/ROCm-capable GPU, but it also works in CPU mode (albeit slowly if the model is huge), provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
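On Linux, the whole setup can be sketched roughly like this (the install script URL and the ollama-webui Docker image name are assumptions based on their respective READMEs; double-check them before running):

```shell
# Install ollama (Linux install script; macOS has a regular app download)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a model right in the terminal
ollama run mistral

# Optional: a browser UI via ollama-webui, served on http://localhost:3000
# (image name/tag taken from the ollama-webui README; may have changed)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main

# Or a text-based UI in the terminal instead:
pip install oterm && oterm
```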
Hmm, I don’t think it’s because of that feature, because it only runs when you explicitly ask it to translate a page for you. You should probably check your extensions and see if you have redundant ones (a common mistake is using multiple ad-blockers/anti-trackers, when uBlock Origin plus Firefox’s defaults is usually good enough).
Yup, Firefox has it: https://browser.mt/ (it’s now a native part of Firefox)
Microsoft really needs someone to remind it of those days:
In the 2000s we had AdSense. So now we’re getting… AISense?