• 0 Posts
  • 366 Comments
Joined 9 months ago
Cake day: September 27th, 2023


  • Depending on where you’re going, you may not need to worry about it much. When I was in postsecondary education, there wasn’t much handwriting required. And I graduated 13 years ago; certainly things have gone more online since then. You might want to check with a current student in your field of study at your university and see what the handwriting requirements are. Make sure to ask whether cursive is a dealbreaker.

    If it is something you’re going to need to work on, there’s really no getting around it: you’re going to need to practice. Cursive or print, you’re going to need to practice. Get a big notebook and something to write about (hopefully something you’re actually interested in), and just start writing. Transcribe a TV show as you’re watching it. Copy a book line for line. You get good at the things you do a lot, so you’re going to have to write a lot.

    Also, I would recommend slowing down. My handwriting is great when I’m writing slowly but can be terrible when I speed up if I don’t pay attention. Slow down to start; if it’s still not legible, slow down even more. Make sure you aren’t practicing your existing bad habits. Then, as you practice, be deliberate: focus on each individual letterform, and as you become more comfortable writing legible letters, try to pick up the pace.

    There are other things you might find helpful: try practicing on wide-ruled paper rather than college-ruled, for instance. Try a pencil or pen that moves more roughly across the page, for more tactile feedback. Make sure your pen or pencil is making strong, clear marks, so it’s obvious which legibility issues come from your hand (and not just a bad implement).

    You can change your writing style; I have, on a couple of occasions. It just takes practice.

  • AI, used in small, local models as an assistive tool, is actually somewhat helpful. AI is how Google Translate got so good a decade or so ago, for instance, and how assistive image recognition has become good enough that visually impaired people can potentially browse the web as proficiently as sighted people. LLM-assisted spell check, grammar check, and autocomplete show a lot of promise. LLM-assisted code completion already works decently well for common programming languages. There are potentially other halfway-decent uses as well.

    Basically, if you let computers do what they’re good at (objective, non-creative, repetitive, large-dataset tasks that don’t require reasoning or evaluation), they can make humans better at what they’re good at (creativity, pattern-matching, ideation, reasoning). AI can help with that, even though it can’t take humans out of the loop.

    But none of those things put dollar signs in VCs’ eyes. None of those use cases get executives thinking, “hey, maybe we can fire people and save on the biggest single recurring expense any corporation puts on its books.” None of those make worried chip manufacturers breathe a sigh of relief that they can keep making the line go up after Moore’s Law finally kicks the bucket. None of those things make headlines in late-stage capitalism. Elon Musk can’t use any of those things as smokescreens to distract from his mismanagement of the (formerly) most consequential social media brand in history. None of that gives former crypto bros that same flutter of superiority.

    So the hype gets pumped up to insane levels, which makes the valuations inflate, which makes them suck up more data heedless of intellectual property, which makes them build more power-hungry data centers, which means they have to generate more hype (based on capabilities the technology emphatically does not have and probably never will) to justify all of it.

    Like with crypto. Blockchain showed some promise in extremely niche, low-trust environments; but that wasn’t sexy, or something that anyone could sell.

    Once the AI bubble finally breaks, we might actually get some useful tools out of it. Maybe. But you can’t sell that.

  • Google wants that to work. That’s why the “knowledge panels” kept popping up at the top of search results, with links to Wikipedia. They only want to answer the easy questions: definitions, math problems, things they can give you the Wikipedia answer for, Yelp reviews, “Thai Food Near Me,” and so on. They don’t want to answer the hard questions, presumably because it’s harder to sell ads against niche questions and topics. And “harder” means you have to get humans involved. Which is why they’re now complaining that users are asking questions that are “too hard for our poor widdle generative AI to handle :-(”. They don’t want us to ask hard questions.

  • The problem is, the internet has adapted to the Google of a year ago, which means that setting Google search back to 2009 would just give every “SEO hacker” a field day: they could push spam to the top of results without any of the controls that now prevent them.

    Google built a search engine optimized for the early internet. Bad actors adapted to siphon money out of Google traffic. Google adapted to stop them. Bad actors adapted again. So began a cat-and-mouse game that ended with the pre-AI Google search we all know and hate today. Through its success, Google destroyed the internet that was; all that’s left is whatever this is. No matter what happens next, Google search is toast.

  • Ok. But what benefit would they gain by forcing people into AI search? That’s not rhetorical; I’m legitimately asking. Are you saying this is just about controlling the experience? Because they already controlled it, and all this does is weaken that control. It’s certainly not easier or more cost-effective. They’ll get LLM training data from either interface. The other things they shut down cost them development, maintenance, or even just server space; but even if they achieved 100% adoption of AI search, they’d still need to maintain the old platform as a data source for the AI and for the results below the AI answer. So what financial incentive do they have to push people toward a more expensive, less-liked endpoint for the same data?