unrelated: @OP looks like you accidentally posted this many times. Imo would be good to delete the others to keep the conversation in 1 place.
I generally agree. The system is utterly rotten.
Only thing I’d mention slightly counter to that: peer review, as a process, is still something I believe is useful.
That is, the process of people with relevant domain expertise critiquing methodology, findings, etc. When it’s done right, it absolutely produces better results, which everyone benefits from.
Where it fails is when cliques and ingroups are resistant to change on principle, which is ofc actually an anti-scientific stance. To put it another way, the best scientist wants to be proven wrong (or less correct) if that is indeed the truth.
It also fails, as you identify, when powerful, corrupt publishers (who are merely leeches) gate-keep the channels for communicating alternate models.
It also fails where laypeople parrot popsci talking points without understanding that peer review is far from infallible. Even the best of the best journals still contain errors - any genuine scientist is the first to admit this. Meanwhile, popsci enthusiasts think that just because something was printed in a journal, it must be unequivocally 100.000% truth, and are salivating at the opportunity to label any healthy dose of skepticism as “antiscience” or “conspiracy theorist” etc.
It also seems to fail when popsci headlines invariably don’t include the caveats all good scientists include with their findings etc.
Final point, which I think would help enormously: it’s very, very difficult to get funding or high-worth publications for reproduction work. The obsession with novelty is not only unhealthy, it’s unproductive.
Reproduction is vastly undervalued. Sadly it’s not easy to get funding or support for ‘merely’ reproducing recent results. There are two reasons why this should change: first, it will ofc help with the reproducibility crisis; second, it will afford upcomers excellent opportunities to sharpen their skills and properly prepare for future ground-breaking work. To put it another way: when reading a novel paper, you think you understand it. Only when you take it to the lab do you truly understand.
not sure if you’re being sarcastic, but if anything this news paints linux deployment in an even better light.
cos nothing proves “microsoft <3 opensource” like releasing a project over ⅓ of a century old
great news, openai keeps getting more open by the minute
/s
easiest question we’re gonna answer all year:
fuck microsoft and their stooges
opensource driver hackers ftw!!
agreed the existing system is deeply flawed and currently on a trajectory to critical failure.
regarding peer review itself, this is another point: people regard peer review as a binary thing which takes place prior to publication, like a box that gets ticked at publication.
which is ofc ridiculous; peer review is an ongoing process, meaning many of the important parts take place after publication. fortunately this does happen in a variety of fields and situations, but it not being the norm leads to a number of the issues under discussion. further, it creates the erroneous mindset that simply because something has been published, it’s now fully vetted, which is ofc absurd.
also agreed, the process should be blind. i believe the reviewers’ identities are often already hidden, but i agree the authors’ should be hidden during the process too.
don’t see the role being unpaid as a problem though; introducing money would complicate things a lot, create even more conflicts of interest, and undermine what little integrity the process still has.
i really love your idea of standardising the process in a network-like protocol. this would actually make an excellent RFC and i’d totally support that.
in a similar vein, this is why i’ve been advocating for a complete restructuring of the support given to reproduction. as you mentioned, the current process is vulnerable to a variety of human network effects, and among the other issues stemming from that, i also see the broken reproduction system playing a role here.
as it currently stands, reviewers can request more explanation or data, the introduction of changes/additional caveats etc, or reject the paper entirely. what this means is a reviewer can only really gauge whether something sounds right, or plausible. and as you correctly identify, certain personalities or flavours of prevailing culture will play a role in the reviewer’s assessment of what merely seems plausible or correct. this has been shown to make major breakthroughs more difficult to communicate and subject to unfair resistance, which has frankly held back society at large.
whereas if there were an organised system of reproduction, it would no longer be left as just a matter of opinion about how something sounds. this is ofc how it’s supposed to work already, and sometimes does, but all too often does not. imo it would be a great detail to include in your idea for a protocol-based review process.
i don’t envision this as always being something which must take place prior to publication; it can and should be an ongoing process, where papers could have their classification formally upgraded over time. currently the only ‘upgrade’ a paper really receives is publicity or citation count, the flaws of which are yet another discussion again.
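just to make the idea concrete, here’s a toy sketch of what that kind of ongoing, reproduction-aware review record could look like. everything here is hypothetical (the status names, fields, and rules are mine, not any existing standard) - it’s only meant to illustrate “classification upgraded over time by post-publication reproduction events”, not to propose an actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    PUBLISHED = "published"      # passed initial pre-publication review
    REPRODUCED = "reproduced"    # independently reproduced after publication
    DISPUTED = "disputed"        # a post-publication reproduction attempt failed

@dataclass
class Paper:
    title: str
    status: Status = Status.SUBMITTED
    reproductions: list = field(default_factory=list)

    def record_reproduction(self, lab: str, succeeded: bool) -> None:
        """Log a post-publication reproduction attempt and adjust the
        paper's classification accordingly (a deliberately naive rule)."""
        self.reproductions.append((lab, succeeded))
        if self.status in (Status.PUBLISHED, Status.REPRODUCED):
            self.status = Status.REPRODUCED if succeeded else Status.DISPUTED

# example lifecycle: publish, then upgrade via an independent reproduction
paper = Paper("an exciting novel result")
paper.status = Status.PUBLISHED
paper.record_reproduction("independent lab A", succeeded=True)
print(paper.status)  # Status.REPRODUCED
```

a real protocol would obviously need much more nuance (multiple conflicting attempts, partial reproductions, who’s allowed to file them), but even this shows how “published” stops being the end state.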