  • Technically, it can and has been done already. The problem is that AI is very bad at creating new ideas and even worse at understanding what it has created (as is required for plots or jokes). As a result, any writing created with heavy AI influence tends to sound like a child’s stream of thought with an adult’s vocabulary, and any jokes rely purely on randomness or on repeating an existing well-known joke. Similarly with art and animation: because the AI doesn’t understand what it is creating, it struggles to keep animated elements consistent and often can’t figure out how elements should be placed in the scene. Voices are probably the strongest part, but even then, the output can be buggy and won’t adjust correctly to match the context of what is being said.

    None of this is to say AI is useless. It’s very good at producing a “good enough” quick fix, or at filling in unimportant or trivial work. If used to help clean up scripts or fill in backgrounds, it can speed up the process greatly at minimal cost. It’s a tool to be used by someone who knows the field, not a replacement for them.


  • In general, I agree, but I think you underestimate the benefits it provides. While ray tracing doesn’t add much to more static or simple scenes, it can make a huge difference in more complex or dynamic ones. Half-Life 2 is honestly probably the ideal game to demonstrate this due to its heavy reliance on physics. Current lighting and reflection systems, for all their advancements and advantages, struggle to convincingly handle objects moving in the scene and interacting with each other. Add in a flickering torch or similar and things tend to go even further off the rails. This is why, in a lot of games, interactive objects end up standing out in an otherwise well-rendered environment. Good ray tracing fixes this and can go a really long way toward creating a unified but dynamic look for an environment (see the toy sketch after this comment). All that is just on the player’s side too; there are even more boons for developers.

    That said, I still don’t plan to be playing many RTX or ray-traced games any time soon. As you said, it’s still a nightmare performance-wise, and I personally start getting motion sick at the framerates it runs at. Once hardware catches up more seriously, I think it will be a really useful tool.
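
    To illustrate the difference in a toy way (my own sketch, not code from any actual engine; the names, shapes, and numbers are made up): a baked lightmap is computed once at build time, so it can’t know about objects the player moves, while a traced shadow test is re-run against the live scene every frame, so a thrown crate can actually block the light.

```python
import math

def baked_shadow(lightmap, x, y):
    # Precomputed offline: great for static geometry, blind to dynamic objects.
    return lightmap[y][x]

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray-sphere intersection test; direction is assumed normalized.
    # (Simplified for the sketch: ignores hit distance, so hits behind the
    # origin or beyond the light would also count.)
    ox = origin[0] - center[0]
    oy = origin[1] - center[1]
    oz = origin[2] - center[2]
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    return b * b - 4.0 * c >= 0.0

def traced_shadow(point, light_pos, moving_spheres):
    # Re-evaluated every frame against the live scene, so dynamic objects
    # (a thrown crate, a swinging door) correctly cast shadows.
    to_light = [light_pos[i] - point[i] for i in range(3)]
    length = math.sqrt(sum(d * d for d in to_light))
    direction = [d / length for d in to_light]
    return any(ray_hits_sphere(point, direction, c, r) for c, r in moving_spheres)
```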


  • A couple of major factors:

    Users who expect low prices - This is partly because of the history of mobile games being smaller and/or ad-funded, but also because the vast majority of people playing games on their phone are looking for a low-barrier-to-entry time waster, not specifically a game.

    Lack of regulation or enforcement - Other gambling-heavy fields tend to be at least somewhat regulated, but mobile games are very light on regulation, and even lighter on enforcement. This allows them to falsely advertise their games and how they function (both in terms of misleading ads and lying about chance-based events and purchases in-game).

    Monopolistic middlemen - On other platforms, there’s more direct competition (e.g., Sony and Microsoft competing head-to-head) or companies that prioritize long-term growth and stability (e.g., Steam or Itch.io). Apple and Google, on the other hand, largely compete on brand perception and hardware specs. This means that their app stores, where they make most of their money, have zero competitors. Since they have no reason to make the stores better, they can instead promote whatever makes them the most money: namely, exactly these manipulative, sketchy virtual slot machines.


  • I think it is technically possible - with the Valve Index you can read the camera input like a webcam, and I’m sure there’s some way to do it with the Quests (although probably not easily). That said, as others have noted, between the bulkiness of the headset, the lower quality of the cameras, the risk of losing tracking, and the natural shakiness of people’s heads, it likely wouldn’t be an improvement. Try watching VR footage from someone who doesn’t stream or record regularly and you can get an idea of how hard the footage can be to follow, even before accounting for the lower camera quality.
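
    For what “read the camera input like a webcam” means in practice, here’s a rough sketch (my own, assuming the headset cameras show up as a standard video device once camera access is enabled in the headset settings; the device index is a guess and varies per system):

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed device index for the headset camera; varies per system
if not cap.isOpened():
    raise RuntimeError("Could not open the headset camera device")

while True:
    ok, frame = cap.read()  # frames arrive like any ordinary webcam feed
    if not ok:
        break
    cv2.imshow("headset passthrough", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```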


  • Relative to cooking a similar meal, absolutely. Getting McDonald’s takes like 5 minutes and almost no effort. Less if ordering for delivery or pickup. If I want to cook myself a burger, it’s probably going to take me like 40 minutes to make and fry the burger and prepare the toppings. I’m sure a good chef could do it much faster, but that’s not me, and especially not after a full work day.

    Edit: Plus, though less directly measurable and comparable, there’s the time and work for planning, shopping, and dishes afterwards.


  • It absolutely can, but doesn’t always. For example, Gamers Nexus is well respected for their thorough and unbiased research and journalism. It would be extremely difficult for them to do that without ads and merch sales, as any products reviewed must be purchased, testing equipment needs to be bought, and experts need to be hired to use said equipment. Until capitalism ceases to exist, most people who make stuff will need to find a way to fund their work, from paintbrushes to high-end testing equipment. If we can’t accept this, we will rarely get creators willing to provide quality content, and what we do get will be biased towards those with money to burn.


  • If used in the specific niche use cases it’s trained for, and as long as it’s used as a tool and not as the final product. For example, using AI to generate background elements of a complete image. The AI elements aren’t the focus and should be things that don’t much matter, but it might be better to use an AI element than to do a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art - otherwise it might just be a gradient or a solid colour because it isn’t important, but having something low-quality is better than having effectively nothing.

    In a similar case, for multidisciplinary projects where the artists can’t realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding - I’m proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to making a couple of the most key resources myself, or hire someone, but since this is a non-commercial, non-monetized project, I can’t buy resources regularly. AI can be a good-enough solution to get the project out the door.

    In the same way, LLM tools can be good if used as a way to “extend” existing works. It’s generally a bad idea to rely entirely on them, but if you use one to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you (see the toy example at the end of this comment), then it’s a way of improving or speeding up your work.

    Basically, AI tools, as they currently are, should be seen as just that by those in or adjacent to the related profession: another tool in the toolbox rather than a way to replace the human.
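
    As a concrete toy example of the “long if-chain” case (my own illustration; the type names and numbers are made up): you might write the repetitive branching yourself, or let a tool draft it for you and then suggest a table-driven rewrite like this:

```python
def damage_multiplier_if_chain(attack_type, target_type):
    # The kind of repetitive branching that is tedious to type out by hand.
    if attack_type == "fire" and target_type == "grass":
        return 2.0
    elif attack_type == "fire" and target_type == "water":
        return 0.5
    elif attack_type == "water" and target_type == "fire":
        return 2.0
    elif attack_type == "water" and target_type == "grass":
        return 0.5
    else:
        return 1.0

# A table-driven rewrite an assistant might suggest: same behaviour, less typing,
# and easier to extend when new types are added.
MULTIPLIERS = {
    ("fire", "grass"): 2.0,
    ("fire", "water"): 0.5,
    ("water", "fire"): 2.0,
    ("water", "grass"): 0.5,
}

def damage_multiplier_table(attack_type, target_type):
    return MULTIPLIERS.get((attack_type, target_type), 1.0)
```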