Bluesky, a trendy rival to X, finally opens to the public
The guy who started Bluesky was the same Twitter co-founder who pushed for Twitter to sell out. Thanks but no thanks. I’ll stick with Mastodon. It’s getting real comfy in there now.
I’m really happy with Mastodon. I don’t plan on going on bluesky.
What is X?
I feel like that’s a conversation you should have had with your parents by now.
“When a failing social media platform and a billionaire narcissist love each other very much…”
We’re going to send X to a farm upstate.
Really soft core porn. You need to get into the triple Xs to get to the good stuff.
I’m glad we can rely on @[email protected] for solid words of advice on this subject.
deleted by creator
The one who’s gonna give it to ya.
But do they deliver to ya?
X Window System (X11, or simply X) is a windowing system for bitmap displays, common on Unix-like operating systems.
It’s weird that this post called it by the short name. The full name, as you typically see in articles, is “X (formerly Twitter)”.
It’s kinda like “The Artist Formerly Known As Prince”. A few places tried to call him “The Artist”, but no one ever knew what that meant.
“The formerly successful website known as Twitter”
A letter in the alphabet.
no thanks 👍
How can a website be trendy if nobody could use it until now?
You could use it, but only by invitation.
Nah
This is the best summary I could come up with:
Underneath, however, the company is building what Graber calls “an open, decentralized protocol” — a software system that allows developers and users to create their own versions of the social network, with their own rules and algorithms.
Savvy social media users begged one another for “invite codes” to join the fledgling network, whose quirky first adopters gave it a vibe that some likened to the early days of Twitter.
But with fewer than a dozen employees at the time, Graber put off a public launch, fearing that it would force the company to spend all its resources on maintaining and moderating the Bluesky network rather than building out the underlying “decentralized” system.
Rose Wang, who oversees operations and strategy for Bluesky, said its goal is to combine the ease of use and shared experience of closed platforms like X and Threads with the user choice and openness of systems like Mastodon’s.
Mike Masnick, editor of the blog Techdirt and a longtime tech analyst, has followed Bluesky’s progress from the start, after a paper he wrote helped to inspire Dorsey to create the project.
Amy Zhang, a professor at University of Washington’s Allen School of Computer Science & Engineering, has been researching Bluesky to study how users respond when given options to control their feeds and moderation systems.
The original article contains 1,180 words, the summary contains 217 words. Saved 82%. I’m a bot and I’m open source!
I got an invite to join last year and signed up to test it out.
Felt like there were a lot fewer people and a lot less content on it than Mastodon.
Unless the users/content now really starts to take off, there’s not enough on there to make it interesting.
Every new social media site will start out like that, whether the platform itself is amazing or another corporate shithole.
Put another way, the hype mechanism of “invite-only” stopped bringing enough hype to justify it.
deleted by creator
Nostr is the way. I think it’s going to end up with way more adoption than mastodon or bluesky. I wrote a post comparing nostr vs mastodon if anyone is curious. https://lemmy.ml/post/11570081
ActivityPub is a W3C standard, which IMO is a big plus over nostr, which doesn’t have an established independent steward.
Also isn’t there the thing where users can’t really be banned on nostr? I’m not sure where I read that, but that’s going to kill any mass adoption if that’s the case.
Sounds like somebody gave you some incorrect information re: banning.
- You don’t need a w3c standard to have a protocol that is open source and used globally; it’s just one way to go about it. You can also have standards made through some other governance body, or standards that simply evolve from a bunch of different devs trying different versions of things until one main way floats to the top because everybody prefers it. Nostr has the NIP (Nostr Implementation Possibilities) process, which has been used to make standards for everything from video streaming to calendar events/invites.
- Relays on nostr, which are the equivalent of instances in ActivityPub/mastodon/lemmy, can set their own moderation policies, defederate from other relays, etc., all the same as in ActivityPub. The moderation abilities are the same: relays can choose what content they allow and ban users/topics/content from other relays. The key difference is that you are by default connected to multiple relays. So if one relay blocks a user you really want to follow, you can keep following that user and see them in your feed; they just don’t show up for other users of that relay. If a relay blocks you, you can’t post content to that relay. So you get the best of both worlds: relays have curated, moderated public squares with trending hashtags and posts, without reducing your ability to choose who to follow and who can follow you.
- Identity portability is another key feature: if your instance goes down, you don’t lose all your DMs, followers, etc.
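The multi-relay point above is the crux, so here’s a minimal Python sketch of it. The relay names, authors, and posts are all hypothetical: a client merges events from every relay it connects to, so a user blocked on one relay still shows up in your feed if any other relay carries them.

```python
# Hypothetical relays: "relay-a" has banned "carol", so it serves none
# of her events; "relay-b" has no such ban and still carries her posts.
relays = {
    "relay-a": [("alice", "post 1")],
    "relay-b": [("carol", "post 2"), ("bob", "post 3")],
}

following = {"alice", "carol"}

def build_feed(relays, following):
    """Merge events from all connected relays, deduplicate them, and
    keep only authors the user follows."""
    seen = set()
    feed = []
    for events in relays.values():
        for author, text in events:
            if author in following and (author, text) not in seen:
                seen.add((author, text))
                feed.append((author, text))
    return feed

feed = build_feed(relays, following)
print(feed)  # carol appears despite relay-a's ban, via relay-b
```

The relay ban still works locally (relay-a’s other users don’t see carol), but the client’s view is the union of its relays.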
I see what you’re saying about it not needing a standards body, and of course that can work fine, but for me it’s an advantage that AP is maintained by a body independent from any specific implementation. An equivalent would be if the AP spec was defined by the Mastodon devs and community—not a bad thing, just not as good in my mind.
The relays thing, I think, is where the “unable to really ban” idea comes from. Are there moderation tools to propagate bans across relays quickly? Does nostr have the same issue as lemmy instances, where an admin abandons the relay and it gets overrun with shit? Some users need to be booted off the network entirely and swiftly sometimes, we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators, or whatever innocuous reason) and these users would achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.
The user portability aspect is the main advantage of it that I can see, and it looks like a pretty clever solution to the issue. Though personally speaking, I only really care about my subscription list, which I sync between two accounts already using my lemmy client. I understand some people might care more about the other stuff though (particularly on microblog platforms)
Before we get into the weeds here, let’s start with an important basic premise: moderation ability at a protocol level, from an instance/relay admin perspective, is identical in nostr and AP.
Are there moderation tools to propagate bans across relays quickly?
Relay operators can share ban lists, like they do in AP. Relay operators can only directly control their own relay, not other relays. I don’t know the ins and outs of how the admin interface looks, but at a protocol level, AP and Nostr offer the same abilities.
Some users need to be booted off the network entirely and swiftly sometimes, we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators or whatever innocuous reason) and these users achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.
Relays sharing ban lists can help solve this problem. I would argue that we don’t want to give that power (to ban a user from the entire network) to a single relay admin or even a couple of relay admins (since anybody can be a relay admin), so broad consensus of some form needs to exist, OR sets of relays can form their own little networks of trust where they automatically trust a ban from other admins in that network. A relay admin doesn’t need to be able to ban somebody from the entire network just because they disagree with that user’s post; they can just ban the user on their own relay. There is value in having public squares with varying degrees of moderation, among other reasons because laws about what kinds of speech are acceptable vary country by country. There is value in having mainstream platforms which refuse to host some kinds of content, and having that be a different moderation policy than the one set by a government. Remember that legality and morality are not the same, and that what is legal vs illegal differs between jurisdictions. We don’t want the legal standards of Russia or China to become the legal standards the entire network has to follow.
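A network-of-trust ban list like this is simple to sketch. The relay names and user IDs below are hypothetical: a relay unions its own ban list with those published by admins it trusts.

```python
# Hypothetical ban lists published by admins this relay trusts.
trusted_peers = {
    "relay.example-a.com": {"npub_spammer1", "npub_troll2"},
    "relay.example-b.com": {"npub_troll2", "npub_abuser3"},
}
local_bans = {"npub_localban0"}

# A user banned by any admin in the trust network is banned here too.
effective_bans = set(local_bans)
for bans in trusted_peers.values():
    effective_bans |= bans

def accept_event(author: str) -> bool:
    """Drop events from any author on the effective ban list."""
    return author not in effective_bans

print(accept_event("npub_troll2"))   # banned by trusted peers
print(accept_event("npub_newuser"))  # not banned anywhere
```

In practice you’d refresh the peer lists periodically and decide per-relay whom to trust, but the core of “bans propagate through mutual trust” is just this set union.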
If the user is doing something outright illegal, which I believe is what you’re referring to, that is a job for law enforcement. Neutral networks like the internet are traditionally policed “at the edges”. We don’t have Gmail proactively filtering for objectionable or illegal content, because of the consequences that come with that: privacy invasion, false positives, additional computational load, reduced reliability of sending/receiving between email carriers, etc. Comcast is not inspecting packets as they fly through its network at the speed of light, delaying them, and determining whether they should be passed or not. It’s the internet; they just pass them through. Instead, we say “this is an open, neutral network, and if you break the law, LEO will deal with it”.
Fair play regarding the tooling being there, then; I had the impression it wasn’t even possible currently. I guess I’d now wonder how ubiquitous its usage is.
My concern with your second part is that law enforcement would not be able to deal with the issue quickly, and in the case of an abandoned relay it could take days or weeks before any action is taken. The problem with such illegal content is that in many places even unwittingly having it in your browser cache puts you massively at risk: it needs to be removed, and the user prevented from continuing, as immediately as possible; anything else puts the people using the network at risk. If such a risk exists, it’s going to put most people off (and entirely understandably). I know I avoided browsing lemmy for a fair while when the problem here was still being figured out, and I thankfully never saw anything, but I’m still wary of browsing on my lunch break at work, for example.
Also FWIW, I think Google does scan emails and Drive for this stuff, and I think all US-based social networks have an obligation to do so as well, IIRC, but I might not be 100% correct on that.
There is no “delete a user from the internet” button. It doesn’t exist. Even if a single admin could ban a user from the entire network, which would give an immense amount of power to any admin, all that user has to do is make a new account to get around it. That’s true for Nostr, AP, Twitter, Facebook, e-mail, etc. This is why spam exists and will always exist. AP or nostr or whoever isn’t going to solve spam or abuse of online services; the best we can do is mitigate the bulk of it. Relays and instances can share ban lists in nostr or AP, that can be automated, and that is the way to mitigate the problem. There is, however, a “delete a person from society” button we can press, and that is LEO’s job. That, conveniently, also deletes them from the internet. It’s just not a button we trust anybody but government to press. We do have a “delete a user from most of AP/Nostr” button in the form of shared blocklists.
As we add stronger and stronger anti-spam/anti-abuse measures, we make it harder and harder to join and participate in networks like the internet. This isn’t actually a problem for spammers: they have a financial incentive, so they can pay people to fill out captchas, do SMS verifications, and whatever else they need to do. All we do by increasing the cost to spam is change which kinds of spam are profitable to send. Other abuse of services that isn’t spam has its own intrinsic motivations that may outweigh the cost of making new accounts. At a certain level of anti-spam mitigation, you end up hurting end users more than spammers. A captcha and e-mail verification block something like 90% of spam attempts and are a very small barrier for users, though even that has accessibility implications. Requiring them to receive an SMS? An additional 10%, but now you’ve excluded people who don’t have their own cell phone or who use a VoIP provider. You’ve made it more dangerous for people to use your service to seek help for things like addiction or domestic abuse, since their partner or family member may share the same phone. You’ve made it harder to engage in dissent against the government in authoritarian regimes. You’ve also made it much more difficult to run a relay, since running a relay now requires access to an SMS service, payment for that SMS service, etc. Require them to receive a letter in the mail? An additional 10%, but now you’ve excluded people who don’t have a stable address or mail access, and it takes a week to sign up for your website, before even getting into apartment numbers and the complications you’d face there. For a listing on Google Maps, maybe a letter in the mail is a reasonable hurdle, since Google only wants to list businesses with a physical address. For posting to Twitter? It’s pretty ludicrous.
I generally trust relay admins to make moderation decisions; otherwise I wouldn’t be on their instance or relay in the first place. And my trust extends to other admins they work with and share ban lists with. And that’s fine. But remember that any person with any set of motivations can be a relay or instance admin. That person could be the very troll we are trying to stop with these anti-spam and anti-abuse measures. What I don’t trust is any random person on the internet being able to make moderation decisions for the entire internet. Which means that any approach to bans needs to be federated and built on mutual trust between operators.
Isn’t nostr something something Bitcoin?
It has an optional built-in tipping function where you can tip users (and receive tips) if you like their posts, just like reddit had. Pretty cool IMO, but not required to use the platform.
It is still rather cringe that it’s joined with Bitcoin at the hip; it even uses secp256k1.
Edit: JFC, man, https://lemmy.ml/post/11247396
Worth mentioning here that Lemmy itself accepts donations in Bitcoin directly and via OpenCollective. Many instances do as well. Bitcoin is free, federated, open-source software and a protocol for money, so it kinda makes sense that there’s some crossover there. https://join-lemmy.org/crypto
If you want a platform with built-in tipping, especially a federated, open-source one, you can’t use PayPal; the fees make microtransactions impossible. Same with basically every other competitor out there. You either need to build your own payment processor (millions of dollars, massive yearly overhead, you have to handle dispute resolution, you need to forge independent relationships with Visa/MC/Amex/Plaid/etc., transactions all have different settlement times, sometimes measured in weeks; it’s an absolute bird’s nest of problems, and that’s just to do it for the US), and each instance would have to have its own payment processor. It’s a nightmare. Or, simple idea, you can just use some type of cryptocurrency.
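To make the fee math concrete, here’s a minimal sketch assuming a percent-plus-fixed card-rail fee schedule of roughly 2.9% + $0.30 per transaction (an assumption based on commonly cited rates, not a quote of any processor’s current pricing):

```python
def card_fee(amount: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    """Fee on a single transaction under a percent-plus-fixed schedule.
    The 2.9% + $0.30 defaults are an assumption, not current pricing."""
    return amount * pct + fixed

tip = 0.10  # a 10-cent tip
fee = card_fee(tip)
# The fixed component alone is 3x the tip, so the fee swamps the payment.
print(f"tip ${tip:.2f}, fee ${fee:.4f}")
```

Any fee schedule with a fixed per-transaction component has a floor below which tips cost more to move than they’re worth, which is why microtipping needs different rails.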
Your choice to avoid it is yours alone, but it seems like a weird thing to be mad about, and a weird basis for avoiding social networks. Do you have such strong reactions to other assets, like stocks? Or other currencies? Would you not use Facebook because users could pay for extra photo storage with Turkish Lira? I don’t love the Turkish government, but it seems like a weird place to draw a line in the sand over which social networks I’ll use.
If you don’t like the Bitcoin feature, you don’t have to use it. Bitcoin has a market cap that puts it in the top 25 countries by GDP, higher than Sweden. It’s been doing its thing for 15 years. People may say they don’t like it, but if you decide to not use any platform or service which accepts or uses Bitcoin, the circle of places you can use is going to keep shrinking. Have fun not shopping at Safeway or any other major grocery store, since they all have Bitcoin ATMs in the form of Coinstars. Have fun not using mutual funds or index funds from major banks, since they all have a degree of exposure to Bitcoin. Have fun not using Cash App or other major payment platforms with some kind of Bitcoin integration. Have fun not being able to use the DMV in Colorado, where you can renew your license with Bitcoin, or ride public transit in Argentina. Bitcoin is global, and adoption grows year on year.
“Crypto” is full of scams and rug pulls and bad actors. But Bitcoin has kept its promises to faithfully relay transactions without a single hack or day of downtime for 15 years. They are not the same.
I’d just like to interject for a moment. What you’re referring to as Nostr, is in fact platforms using Nostr as its protocol to communicate with each other. You see, ‘Nostr’ is just the protocol. But when you add the wide range of available clients, it becomes a fully functional fediverse. So, it’s more fittingly dubbed clients powered by Nostr!
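To make “Nostr is just the protocol” concrete: at the wire level, a post is a signed JSON event. Here’s a minimal sketch of the NIP-01 event-id computation; the pubkey below is a placeholder, and the actual Schnorr signing step over secp256k1 is omitted since it needs a non-stdlib library.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a NIP-01 event id: the sha256 of the canonical JSON
    serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Placeholder pubkey (32-byte hex). Real events are then Schnorr-signed
# with the matching secp256k1 private key; that keypair IS the identity,
# which is what makes accounts portable across relays and clients.
pubkey = "ab" * 32
event_id = nostr_event_id(pubkey, 1700000000, 1, [], "hello nostr")
print(event_id)  # 64-character hex digest
```

Any client that implements this (and the relay websocket messages) can interoperate with the rest, which is the point of the protocol/client distinction.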
Yes very true!
i also would like to interject and say i don’t trust anyone involved with the development or promotion of nostr.