- cross-posted to:
- [email protected]
DuckDuckGo, Bing, Mojeek, and other search engines are not returning full Reddit results any more.
I am hoping (probably naively so) that lemmy’s stock of technical answers will continue to grow and eventually become a half decent archive for people to search for potential solutions.
tbh I’ve never seen a Lemmy link when searching for stuff. Is it too small to show up? Or do search engines not index Lemmy instances?
A lot of Fediverse admins are just normal people like you and me with a budget, and disallowing bots and spiders helps save bandwidth, and the budget.
Yep. I block all bots to my instance.
Most are parasitic (GPTBot, ImageSift bot, Yandex, etc) but I’ve even blocked Google’s crawler (and its ActivityPub crawler bot) since it now feeds their LLM models. Most of my content can be found anyway because instances it federated to don’t block those, but the bandwidth and processing savings are what I’m in it for.
Teach me oh wise one
Kinda long, so I’m putting it in spoilers. This applies to Nginx, but you can probably adapt it to other reverse proxies.
- Create a file to hold the mappings and store it somewhere you can include it from your other configs. I named mine `map-bot-user-agents.conf`.

Here, I’m doing a regex comparison against the user agent (`$http_user_agent`) and mapping it to either a `0` (default/false) or a `1` (true), storing that value in the variable `$ua_disallowed`. The run-on string at the bottom was inherited from another admin I work with, and I never bothered to split it out.

`map-bot-user-agents.conf`:
```nginx
# Map bot user agents
map $http_user_agent $ua_disallowed {
    default 0;
    "~CCBot" 1;
    "~ClaudeBot" 1;
    "~VelenPublicWebCrawler" 1;
    "~WellKnownBot" 1;
    "~Synapse (bot; +https://github.com/matrix-org/synapse)" 1;
    "~python-requests" 1;
    "~bitdiscovery" 1;
    "~bingbot" 1;
    "~SemrushBot" 1;
    "~Bytespider" 1;
    "~AhrefsBot" 1;
    "~AwarioBot" 1;
    "~GPTBot" 1;
    "~DotBot" 1;
    "~ImagesiftBot" 1;
    "~Amazonbot" 1;
    "~GuzzleHttp" 1;
    "~DataForSeoBot" 1;
    "~StractBot" 1;
    "~Googlebot" 1;
    "~Barkrowler" 1;
    "~SeznamBot" 1;
    "~FriendlyCrawler" 1;
    "~facebookexternalhit" 1;
    "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfly|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTTrack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErsPRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^QueryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanner|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^VoidEYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;
}
```
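If you want to sanity-check which user agents a map like that would catch before deploying it, you can approximate the lookup in Python. This is a sketch with a hypothetical subset of the patterns, not the full list; note that Nginx `~` patterns are case-sensitive while `~*` (or an inline `(?i)`) is case-insensitive:

```python
import re

# Hypothetical subset of the Nginx map patterns, for offline testing.
# Nginx "~" patterns are case-sensitive; the big catch-all uses (?i).
PATTERNS = [
    (re.compile(r"GPTBot"), 1),
    (re.compile(r"SemrushBot"), 1),
    (re.compile(r"ClaudeBot"), 1),
    (re.compile(r"(80legs|HTTrack|MJ12bot)", re.IGNORECASE), 1),
]

def ua_disallowed(user_agent: str) -> int:
    """Return 1 if the user agent matches any block pattern, else 0 (the map default)."""
    for pattern, value in PATTERNS:
        if pattern.search(user_agent):
            return value
    return 0

print(ua_disallowed("Mozilla/5.0 (compatible; GPTBot/1.0)"))            # 1
print(ua_disallowed("Mozilla/5.0 (X11; Linux x86_64) Firefox/120.0"))   # 0
```

This mirrors the map's first-match semantics: anything that matches gets `1`, everything else falls through to the default `0`.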
Once you have a mapping file set up, you’ll need to do something with it. This applies at the virtual host level and should go inside the `server` block of your configs (except the include for the mapping config). This assumes your configs are in conf.d/ and are included from nginx.conf.

The `map-bot-user-agents.conf` is included above the `server` block (since it’s an `http`-level config item), and inside `server`, we look at the `$ua_disallowed` value, where 0=false and 1=true (the values are set in the map). You could also do the mapping in the base `nginx.conf`, since it doesn’t do anything on its own.

If the `$ua_disallowed` value is 1 (true), we immediately return an HTTP 444. The `444` status code is an Nginx thing, but it basically closes the connection immediately and wastes no further time/energy processing the request. You could, optionally, redirect somewhere, return a different status code, or return some pre-rendered LLM-generated gibberish if your bot list is configured just for AI crawlers (because I’m a jerk like that lol).

Example `site1.conf`:
```nginx
include conf.d/includes/map-bot-user-agents.conf;

server {
    server_name example.com;
    ...

    # Deny disallowed user agents
    if ($ua_disallowed) {
        return 444;
    }

    location / {
        ...
    }
}
```
I’ve always been told to be scared about `if`s in nginx configs

Yeah, `if`s are weird in Nginx. The rule of thumb I’ve always gone by is that you shouldn’t try to `if` on variables directly unless they’re basically pre-processed to a boolean via a `map` (which is what the user agent map does).
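As a sketch of that rule of thumb (variable and pattern names here are just illustrative), the safe pattern computes a 0/1 flag at the `http` level and only ever `if`s on the result:

```nginx
# Risky: regex-matching a raw request variable inside server/location
# blocks runs into the rewrite module's well-known "if is evil" quirks.
# if ($http_user_agent ~* "GPTBot") { return 444; }

# Safer: pre-compute a boolean at http level...
map $http_user_agent $is_blocked_bot {
    default 0;
    "~GPTBot" 1;
}

# ...then the if only tests the pre-computed flag and returns immediately,
# which is one of the few if usages the docs consider safe.
server {
    if ($is_blocked_bot) {
        return 444;
    }
}
```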
So I would need to add this to every subdomain conf file I have? Preciate you!
I just include the `map-bot-user-agents.conf` in my base `nginx.conf` so it’s available to all of my virtual hosts.

When I want to enforce the bot blocking on one or more virtual hosts (some I want to leave open to bots, others I don’t), I just include a `deny-disallowed.conf` in the `server` block of those.

`deny-disallowed.conf`:

```nginx
# Deny disallowed user agents
if ($ua_disallowed) {
    return 444;
}
```

`site.conf`:

```nginx
server {
    server_name example.com;
    ...

    include conf.d/includes/deny-disallowed.conf;

    location / {
        ...
    }
}
```
I have two questions. How much do those bots consume your bandwidth? And by blocking search robots, do you stop being present in the search results or are you still present, but they do not show the content in question?
I ask these questions because I don’t know much about the topic when managing a website or an instance of the fediverse.
How much do those bots consume your bandwidth?
Pretty negligible per bot per request, but I’m not here to feed them. They also travel in packs, so the bandwidth does multiply. It also costs me money when I exceed my monthly bandwidth quota. I’ve blocked them for so long, I no longer have data I can tally to get an aggregate total (I only keep 90 days). SemrushBot alone, before I blocked it, was averaging about 15 GB a month. That one is fairly aggressive, though. Imagesift Bot, which pulls down any images it can find, would also use quite a bit, I imagine, if it were allowed.
With Lemmy, especially earlier versions, the queries were a lot more expensive, and bots hitting endpoints that triggered a heavy query (such as a post with a lot of comments) would put unwanted load on my DB server. That’s when I started blocking bot crawlers much more aggressively.
Static sites are a lot less impactful, and I usually allow those. I’ve got a different rule set for them which blocks the known AI scrapers but allows search indexers (though that distinction is slowly disappearing).
And by blocking search robots, do you stop being present in the search results or are you still present, but they do not show the content in question?
I block bots by default, and that prevents them from being indexed since they can’t be crawled at all. Searching “dubvee” (my instance name / url) in Google returns no relevant results. I’m okay with that, lol, but some people would be appalled.
However, I can search for things I’ve posted from my instance if they’ve federated to another instance that is crawled; the link will just be to the copy on that instance.
For the few static sites I run (mostly local business sites since they’d be on Facebook otherwise), I don’t enforce the bot blocking, and Google, etc are able to index them normally.
Thanks for the explanation and it was clear to me.
Could it be possible to have one major global instance that aggregates everything so it can be indexed by search engines? Would that work? Or do I not fully understand how federation works?
That would defeat the purpose of federation.
It becomes a central choke point of moderation. Who gets to decide what instances are part of global and which ones aren’t. Because a free for all isn’t going to end well. And then you’re back at Reddit.
I wonder if you could have an instance federated to every other instance just for archived purposes, to save the data on every other instance’s post and comment. Because copies of posts and comments are saved to federated instances, too, right? Or do I understand the tech wrong?
So it could have an admin team but no users, to prevent people worried about spammers and bots joining that instance to get around defederation rules. Maybe it just has a bot that crawls Lemmy, looking for instances to federate to. Could that work?
You’re describing Meta’s plan but yes that could work.
Godamnit Meta… Lol
I prefer the Internet Archive plan than the Meta Plan.
Right, but having a centralised search index thingy is better than none at all. Maybe there could be something where it’s a joint effort from admins from many of the biggest servers, idk if that would work.
Lemmy search already is quite excellent… at least here on lemm.ee, we don’t have many communities but tons of users subscribed to probably about everything on the lemmyverse so the servers have it all.
It might be interesting to team up with something like YaCy: Instances could operate as YaCy peers for everything they have. That is, integrate a p2p search protocol into ActivityPub itself so that also smaller instances can find everything. Ordinary YaCy instances, doing mostly web crawling, can in turn use posts here as interesting starting points.
I just wish lemmy search itself wasn’t broken…
Gotta keep some things that feel like reddit.
All a spider needs is an instance to download everything.
I was worrying about precisely this. I’d be ok with blocking search engines if there was a better way of searching but AFAICT there isn’t federated search of any kind?
Really? I thought they were free and didn’t affect bandwidth.
Any data transit costs money. Both in the data transit itself and in the increased server resources to respond to the web queries in the first place.
Ah, that makes sense. Not really familiar with this stuff, so I didn’t think it’s that intensive lol
bots take resources to serve just like any regular user
They usually only index text though
I’ve seen it a couple of times when searching on DDG.
I’ve seen some when I appended “Lemmy” just like “Reddit”. But it relies on lemmy being in the domain name.
Also I assume even when people click on those results, they don’t get ranked much higher because it’s so many different domains while reddit is just one.
Kagi has a button that lets you search fediverse forums. I haven’t tested it yet though.
Edit: yup, works like a charm!
Adding lemmy does nothing for me, it searches for Lemingrad or some shit.
Most of the originalish content on lemmy are linux related stuff, memes and porn. The latter 2 are mostly image/video based, so you don’t search for that very frequently and easily. I can see that in the future it will become a very relevant source of info in linux admin and user circles.
I go back to r*ddit sometimes for some local content which is non existent on lemmy. I see that the tech related subs are mostly dead there, or at least only shadows of their former selves. E.g. go to r/linux, sort by top all time. In the first 100 results you will barely find anything posted after the exodus.
Yeah, the notion that Lemmy is a Reddit replacement is misguided. It definitely doesn’t have the same Q&A balance Reddit does. It feels a lot more like 90s and early 2000s forums than the large-scale self-service link and customer service churn Reddit encourages.
Which I’m all for. I was never a Reddit guy and I do like it here. But in terms of how bad it is now that Reddit is not happy to host most of the actually useful online content for free… well, that’s a different conversation.
Yeah I mostly go back for r/BestofRedditorUpdates to get my trash drama fix and r/nursing to commiserate with my people. I’ve tried bringing in more hcw communities but sometimes it’s tiring to be the first of a few to move over. It elicits some pretty strong feelings of isolation.
deleted by creator
You can always add “site:lemmy.world” to your search (remove the quotes). I commonly do that, as well as the same for reddit or stack overflow.
The problem with that is, lemmy.world is only one of many different instances. Too bad there isn’t a way to add a modifier that searches the entire fediverse.
yea i’ve been doing “inurl:lemmy” for that reason
from the top of my head, that won’t include lemm.ee, sopuli, beehaw, szmer.info, slrpnk.net, sh.itjust.works, or other threadiverse instances like kbin/mbin.
You’d miss instances that don’t use “lemmy” in the URL, but it’s at least a better solution than specifying a single instance.
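Since there’s no single modifier for the whole fediverse, one workaround is to OR together `site:` filters for the instances you know about. A quick sketch (the instance list here is just an example, not exhaustive):

```python
# Build a search-engine query string that covers several known instances.
# The list is illustrative; you'd maintain your own.
INSTANCES = ["lemmy.world", "lemm.ee", "sh.itjust.works", "beehaw.org", "sopuli.xyz"]

def fediverse_query(terms: str) -> str:
    """Combine search terms with per-instance site: filters."""
    sites = " OR ".join(f"site:{host}" for host in INSTANCES)
    return f"{terms} ({sites})"

print(fediverse_query("nginx bot blocking"))
# nginx bot blocking (site:lemmy.world OR site:lemm.ee OR site:sh.itjust.works OR site:beehaw.org OR site:sopuli.xyz)
```

Most engines support `OR` between `site:` filters, though very long queries can get truncated or flagged.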
Appending
(intext:“modlog” & “instances” & “docs” & “code” & “join lemmy”)
to your search query will search most instances. Works with Google, Startpage, SearXNG afaik.

Very nice, thanks!
Was able to find this thread:
(Heh, when testing this sanitized URL from the thousand character monster it was before, Google asked me if I was a bot. I think parentheses and stuff make them suspicious.)
One of the major problems with Lemmy is that many posts get deleted and that nukes the comment section (which is where most of the answers will be).
I wish Lemmy deleted posts closer to how Reddit deletes posts - the post content should be deleted, but leave the comments alone.
Searx will show Lemmy results, at least on some Searx instances.
Twice I have come across links to lemmy, definitely not the norm though.
I’m inclined to think due to the nature of the platform, contents are constantly duplicated to the eyes of search engines, which hurts authoritativeness of each instance thereby hurts ranking.
Problem is that we’ll probably need a dedicated search engine for that. As answers are spread in lots of instances, some of them without “lemmy” in the name, I assume.
Seems like a solvable problem though. We have a list of federated servers inately built into activitypub, right? Just need to tag results from those servers as being linked to a “lemmy” keyword search.
I’m sure I’m oversimplifying it, but all the pieces are there, just need search engines to be smart about how they index. Since there are a couple of federation based models that would be good to index, not just lemmy, it would probably behoove them to figure it out.
Yeah, but we need non-technical stuff too, which is what I hate about the UI and stuff not being made simpler for non-tech people to start using lemmy. I want doctors, lawyers, and casual people asking questions about everyday items and stuff so I can search “best sleep mask lemmy” or any product category and find good discussions. Would also help if lemmy.com was an instance instead of just redirecting to lemm.ee
I think we have to contribute our hours of UX assistance to see changes there. The brilliant engineers who donate their time probably both focus on working features first and specialize more in technical problem-solving than visual design.
I use voyager and it’s great, but people will discover it through the website, so it should be improved/simplified as well
Highly doubtful.
The few times I have bothered to ask technical questions I mostly get one of the following:
- Ideological ranting. “The problem is you aren’t running arch linux in that corporate environment with proprietary hardware you need to interface with”
- Complete refusal to read the question. “I totally didn’t read that you said Foo was not viable for reasons XYZ but you should use Foo”
- Complete nonsense
Reddit has a lot of that too but ALSO has the institutional knowledge of people who actually care enough to answer. Similar to stack overflow.
I try to help where I can but this is an enthusiast “site”. So you have all the people who suggest all the crap they heard on linus tech tips rather than “Okay, for my day job we use X but no sane person should use that at home. Look into Y”.
That said: I have said it before and I’ll say it again. The age of the online message board for tech support is long gone. Because the super useful results might be talking about a bug from five years ago rather than a bug from today. The answer really is ephemeral discord servers.
Ephemeral discord servers are awful because they don’t scale and they can only ever help the lowest common denominator of questions/issues. We need something else, but it has yet to present itself as a solution.
I’m sure you’ve had bad experiences but they actually scale as well as any forum ever did and are great because the general vibe is not “This was asked ten years ago, go figure out some search terms” and more actually responding to and helping people.
The key is to have a moderated support channel.
It’s a never ending onslaught of beginner questions, and experienced folks with domain knowledge burn out. I’m sure it’s good when it’s new and fresh and everyone is excited to participate, but that wears out. It’s why things went away from mailing lists, or why mailing lists started getting archived, so they could be searched.
I guess with most things it comes in cycles, and we’re at the on demand answers cycle right now.
That has not at all been my experience over the years.
Yes, the vast majority of questions are “beginner questions”. Which… is true no matter where you go.
But when someone has the ability to articulate a “real” problem? Everyone comes out of the woodwork because that is actually interesting. And there is a very strong communal feeling of “we all have the same problem and are trying to collect data”
Usually when I try to get help with a “real” problem through discord, I get crickets
Drop a link to a few places people ask tech questions and I will do my part to contribute
Really for technical answers things should be on a forum. Troubleshooting a linux distro, post on the distro’s forums. Troubleshooting a piece of software, make an issue on its codeberg/github/gitlab/etc. It makes sense that if you’re having an issue with a specific thing you ask for help on a forum dedicated to that thing. I don’t think it’s a positive that things are becoming centralised onto generalised social media, even for more decentralised federated social media like Lemmy. It just makes support for a given piece of software more spread out and harder to find.
The most annoying thing with that is that you need an account for every little forum that you want to post in. But you are still right.
This might be a hot take, but I hope that we as a platform are toxic enough to advertisers so that big tech’s enshittification and advertising never becomes a problem.
Twitter still has advertisers.
As I understand it, Lemmy, being FOSS, is pretty immune to this since there are no big tech shareholders to appease. Lemmy is susceptible to EEE (embrace, extend, extinguish) via something like Threads, however.
To be fair, Reddit is no longer that good of a source for answers in the later years.
Quality drop in comments is insane. Sometimes it looks like Quora.
Also my collection of hobbies seems to match up well with the people who nuked their post history after the API-ocalypse. Even when I get good search results I click through and… so many deleted comments…
As someone who hobby hops I’ve had to just accept reddit is just not a viable option anymore. I’ve been using YouTube (revanced) to learn and get tips. I miss the interactions with people though…
It irritates me that so many forums and media sites allow you to edit your posts at will. There’s one site I go to that I like very much - it has a 5 minute edit window, and after that, your post can no longer be edited. You can’t change what you said, pretend you never said things, etc, once you say something it remains. It would be nice if more sites were like that. Or at least, if you edit/delete something, for there to be an option to check the history to see what it used to be, so if you try to delete some comment you made people can still check it. Whether it’s informational, or it’s because you’re trying to hide something you said that you realize was actually super shitty and people are getting angry at you for it, I prefer things to stick.
Nah, people should be able to take back what they said. No humans in all of history had to account for everything they ever said. Better to let the past be forgotten. Can that be abused? Sure. But I think there’s value in letting people realize what they said was bad and take it down.
I think there’s a happy middle ground where deletion just disassociates the comment with you. It will show (deleted) or something but the original text remains.
Maybe there should be exclusions for personal or identifying information in such a system.
text can easily be traced back to its original writer nowadays using AI statistical analysis. especially on internet forums where people are not necessarily worrying about grammar and accuracy.
If you’re going that far you could probably just pull an old cached version of the page from before it was deleted
yes, that too!
Better to acknowledge it in a response. I prefer to do that myself if I’m wrong or something of that nature, post a reply acknowledging instead of trying to cover up that I was ever wrong in the first place.
I agree with you, and I never delete what I post unless it was straight-up a glitch or a typo or something. But, I still think other people should have that option if they want it.
I was looking for Bluetooth speakers recommendations and it’s the first time I really noticed “generic bot replies” like “I’ve got this great product to recommend, not only is it good but it offers great sound quality as well! The product is [link to Amazon page]”
Gotta start searching using “before:” to get quality results…
I’m seeing bots promoted and sold to generate those kinds of replies. RIP internet. I’m looking forward to SSN/DNA+background check review verification (I kid, but I half dream of that privacy nightmare partially plugging the review fraud hole).
I’m with you on embracing the privacy nightmare to kill off cheaters in games. Tie an account to a real identity and that problem will quickly reduce.
Am I bbbrrrregnant?
If we consider all possible outcomes on a galaxy scale, then No.
Lemmy Will be king soon…is there a Lemmy search engine?
Haha wtf are you saying? Lemmy is now like what reddit was in 2016, and by the end of next year it will be like reddit is now.
Same reason for the downfall too: Activist mods and hands-off admins that don’t care if their mods harass.
Only a matter of time
Lemmy needs people with a wider range of expertise and interests. Right now it’s more niche than reddit c.2010
This is patently untrue. I want Lemmy to be successful as much as any other user on here but reddit had 150-200 million MAU in 2016. Lemmy is being recorded as a generous 2 million.
Sometimes quora looks better.
Unfortunately often the only source when searching for something like a specific error with a SaaS product.
Seems insulting to quora haha
They should include reddit in the list of search engines that don’t work well with reddit
Seems anticompetitive
Ding ding ding, winner winner chicken dinner
After seeing this news I just created this lemmy account. I hope people make the right decision and move on to lemmy.
It’s pretty good here.
And will continue to get better.
Heya jack, welcome aboard!
welcome, but maybe consider not using the world instance. it is pretty saturated and the point is to spread users out across many instances instead of having one monolithic one
Welcome.
Same here
Then welcome to you too! There’s a nice selection of apps if you haven’t tried them, since Lemmy has no financial incentive to limit access to the content.
Welcome!
Welcome and don’t feel shy to contribute!
If you use Bing, DuckDuckGo, Mojeek, Qwant or any other alternative search engine that doesn’t rely on Google’s indexing and search Reddit by using “site:reddit.com,” you will not see any results from the last week.
That’s absolutely insane… Reddit truly is making things awful. The “just add reddit” or “just add site:reddit.com” has been trash for a while because they bombard you with the “pwease use the app” and not showing more than like three comments at a time. It’s useless.
Reddit truly is making things awful.
They’re no longer interested in driving traffic to the site, is my guess. They’re far more interested in devising new ways to extract rents from the existing participant base. So rather than pay Google to prioritize their site, or incentivize Google to link to their site with internal content hygiene techniques, now they’re getting paid by Google to exclusively serve up content.
It’s useless.
The sheer volume of junk content, the amount of content that just shows up as deleted or archived, and the rate at which I’m served “Reddit” as a source of data when there’s no conceivable reason why it should be near the top of my search list is very frustrating.
I don’t get why Google would agree to pay for anything. Google can survive without Reddit, but Reddit would be hurt without Google and would eventually be forced to give in. Where’s that corporate greed when you need it?
I don’t get why Google would agree to pay for anything.
Exclusivity, both for boosting better access to internal reddit data and for harvesting that data into their AI models, presumably.
Oh yeah, maybe it was a package deal and they only really cared about the training data.
The “just add reddit” or “just add site:reddit.com” has been trash for a while
Has that ever been true? I always assumed it was some sort of shadow marketing campaign to get people to look at reddit more. Pretending that one website is the only reliable source of answers on the internet is incredibly audacious, it always seemed very farfetched to suggest that
It’s not that it was the “only” source, it’s just that it would filter out a bunch of garbage articles like “top 10 best ways to blah blah blah”
“pwease use the app”
the ublock origin annoyances list can filter this out at least, i strongly recommend it
This must be something extra to enable, how do I do it?
Ahh, in the settings for the extension. I just enabled all lists. Is there any real reason not to?
there are redundant lists. that and more lists = slower browsing.
I mean, it’s unlikely anything of value has been posted to reddit in the last week anyway. Or like the last 2 years.
It’s honestly a travesty what’s happened to Reddit. If I want to search for a forum topic or something where random people give their honest opinions, Reddit was about the only place left on the internet and now that’s gone too.
I mean, the people still exist and the need for honest opinions is still there. We just need to find a new place where money isn’t such a big problem (although it will always be a problem to some degree). I really think a more stable and easy to use Lemmy could attract a large crowd.
Reddit has been viral marketing for over a decade. Very little of what is on there should be taken at face value in terms of reviews of products. The only thing it’s good for is to find information about fixes for things or some very broad generic info.
The recent crowdstrike debacle had a fix on their subreddit before it was communicated anywhere else. Stuff like that is still relevant at least.
Niche subreddits were still good. Fuck the mains, but there was a lot of really good content, even very technical, in tightly focused communities like r/LocalLLaMA, etc. In a lot of ways the format of how conversations flow there work better (and worse) than stackoverflow. Still is good content, but I really can’t bring myself to go there because of the nasty shenanigans that spez put the communities through.
I really hoped for a while that Reddit would be the one to break the embrace, extend, extend, enshittify mold that so many great techs succumb.
But everyone has a sellout price and so the EEEE seems to be a law of nature.
We still have Lemmy
That’s fine. I don’t want Reddit results anyway.
Unfortunately (for me, I guess), appending ‘reddit’ is still the way to go for many queries…
I’ve noticed a huge decline in the reliability of this trick now that every company on earth knows about it. You’ll just be served up posts with 0-3 comments and few upvotes when searching for product reviews or recommendations, for example.
The way the article is written you’ll still get existing information, just not new posts.
Really? For what queries though? I mostly look up tech troubleshooting when ime there are much higher quality results from forums or sometimes stack exchange sites
Yeah for tech stuff stack exchange is the way to go. But if I’m looking for info on non-tech stuff, there’s not really a site that I know of that has a bunch of general user submitted q&a. There’s quora, but that’s absolutely horrible.
Also even with stack exchange, the rules are a bit more strict there, low effort posts are uncommon which can mean you can’t find some stuff there. Although usually there’s some other forum you can find your answer on.
Great, neither Google search or reddit work anymore. They deserve each other.
Every time I click a Reddit link now it’s just “download the app to verify your age” regardless of what it is
I feel your pain.
I edit the URL, replacing the first part with “http://old.reddit.com”. That still seems to work, last I checked, but I fully expect it to be killed any day now.
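That URL edit is easy to script if you do it a lot. A quick shell sketch using bash substring replacement (the example URL is made up):

```shell
url="https://www.reddit.com/r/linux/comments/abc123/some_post/"

# Swap the default host for the old-reddit one.
old_url="${url/www.reddit.com/old.reddit.com}"

echo "$old_url"
# https://old.reddit.com/r/linux/comments/abc123/some_post/
```

The browser-extension route mentioned below does the same thing automatically.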
There’s a firefox extension “old reddit redirect” that’ll do this for you. Been using it for years. But yeah any day now I expect old reddit to be offline.
I don’t, I’m convinced it’s a “if you raise the price of the hotdog, I will kill you” kind of deal (as in, some of the devs still use the old UI for themselves)
I mean, I would’ve said the same about the mobile apps, but here we are.
That’s different, they didn’t remove the apis that enable mobile apps, they just made them unreasonably expensive. But you can theoretically still use them
in practical terms the apps were killed anyway though
btw i think fatbird still is free and works on mobile (if you need an app)
Removed by mod
deleted by creator
Oh good, I can’t view most reddit threads without an account anymore, so it’ll be nice to see those results go away.
You actually can if you replace “www” with “old.”
Actually you can’t anymore
Yes you can. I literally just did it now to check, and it works fine. I get the “download the app” nonsense, edit the URL, then I can see the content just fine. It works every time.
I use reddit without being logged in a bunch. Worked fine earlier today! I’m on Firefox if that matters.
I wasn’t aware of this. When did it start? So far it has never happened to me that I couldn’t view reddit threads.
They’re so scared from ai scrapers stealing “their” content that they blocked wide ranges of IP address
I can’t see it from my VPN for example
Is Reddit dead yet?
No, they’re transitioning to different user bases that don’t give a shit when they’re screwed over.
Go to reddit now and it’s celebrity drama, pop music, and Indian sub reddit. Oh and Love Island.
And astroturfing AI-bots. So many bots.
Oh yes! They have a quite popular sub where people actively hate Taylor Swift. The sub’s mods remove your comments and threaten to ban you if you ask: “Guys, are you okay? Is this how you want to live your life?” I don’t care about her much, but this is so bizarre. Absolute madness.
Some subs are very negative. I remember antimlm was, as well, people spent all day mocking people who got involved in MLM schemes. I subbed for a while because my wife was in one, and I was hoping for help, advice, tips, etc., but it was all just mocking them and calling them “hon bots” or whatever the term was. I had to unsub, I don’t need that negativity in my life.
The hobby subs, on the other hand, always seemed extremely supportive, or at least the ones I was in were. For example, the radio control (cars, planes, boats, etc.) sub members were totally into it and totally supportive of whatever it was you were doing. It was inspiring and made me want to get back into the hobby. Those kinds of subs were the best part of reddit, and unfortunately we haven’t recreated that energy for things like that here - just not enough users.
Doesn’t look like it, far too many groups that don’t care about the degradation
deleted by creator
I think Reddit is already in the “too big to die” stage, but I hope you will be right.
For example Elon made everything significantly more awful on Twitter, seemingly like his objective is to just kill Twitter, but it’s still only lost like 15% of users, after ALL the dramas.
Sadly, no
They’re sure giving it their best half-assed try.
More like undead.
Fuck You Reddit