• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: August 4th, 2023

  • I’m not assuming it’s going to fail; I’m just saying that the exponential gains seen in early computing are going to be much harder to come by because we’re not starting from the same grossly inefficient place.

    As an FYI, most modern computers are modified Harvard architectures, not von Neumann machines. There are other architectures being explored that are even more exotic, but I’m not aware of any that are massively better on the power side (vs. simply being faster). The acceleration approaches I’m aware of (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/von Neumann architectures.



  • There are probably a bunch of reasons for the multi-wing design, but the big one is going to be improving lift/carrying capacity without increasing the width.

    The most efficient wings for low speeds are glider wings: as long and thin as possible. That makes them inconvenient to pack, and folding joints are weak points. The second wing adds lift, but also problems: it’s less efficient than a single wing of the combined length would be, and the front wing makes the rear wing less efficient. The winglets improve the situation somewhat, and facing downward they also improve maneuverability.
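    For intuition on why long and thin wins at low speed, the classic lifting-line result says induced drag falls as aspect ratio (span squared over wing area) rises. A toy sketch in Python; the lift coefficient, aspect ratios, and Oswald efficiency here are assumed illustrative values, not numbers for this aircraft:

    ```python
    import math

    def induced_drag_coeff(cl, aspect_ratio, e=0.9):
        # Lifting-line estimate: C_Di = CL^2 / (pi * e * AR).
        # e is the Oswald efficiency; ~0.9 is a typical assumed value.
        return cl ** 2 / (math.pi * e * aspect_ratio)

    # Same lift coefficient, stubby wing vs. glider-like wing:
    print(induced_drag_coeff(cl=0.8, aspect_ratio=6))   # ~0.038
    print(induced_drag_coeff(cl=0.8, aspect_ratio=25))  # ~0.009
    ```

    Roughly quadrupling the aspect ratio cuts induced drag by the same factor at the same lift, which is the trade a folding multi-wing design tries to approximate within a packable width.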




  • TCP has been amended in backwards-incompatible ways multiple times since 1993. See e.g. RFCs 5681, 2675, and 7323.

    Plus, speaking TCP/IP isn’t enough to let you use the web, which is what most people think of when you say “Internet”. That 1993 device is going to have trouble speaking HTTP/1.1 (or 1.0 if you’re brave) to load even the most basic websites, and no, writing the requests by hand doesn’t count.
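    For a sense of what “by hand” even means here: below is roughly the minimum a client has to emit for a bare HTTP/1.1 fetch, as a Python sketch against example.com (one of the few hosts still answering plain HTTP on port 80). It ignores TLS, redirects, chunked transfer coding, and compression, all of which real sites expect:

    ```python
    import socket

    # Minimal hand-written HTTP/1.1 request. Assumes a plain-HTTP host;
    # most modern sites won't answer at all without TLS on port 443.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"    # the Host header is mandatory in HTTP/1.1
        "Connection: close\r\n"    # opt out of persistent connections
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Status line, e.g. "HTTP/1.1 200 OK"
    print(response.split(b"\r\n", 1)[0].decode())
    ```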




  • I haven’t explained what the differences are because almost everything is different. It’s like comparing a Model T to a Bugatti. They’re simply not built the same way, even if they both use internal combustion engines and gearboxes.

    Let me give you an overview of how the research pipeline works, though. First is fundamental research, which outside of semiconductors is usually funded by public sources. This encompasses things like methods of crack formation in glasses, better solid-state models, improved error correction algorithms, and so on.

    The next layer up is applied research, where the fundamental research is applied to improve or optimize existing solutions or to create new partial solutions to unsolved problems. Funding here is a mix of private and public depending on the specific area. Semiconductor companies do lots of their own original research here as well, as you can see from these Micron and TSMC memory research pages. It’s very common for publicly funded researchers at this stage to take that research and use it to go start a private company, usually with funding from their institution. This is where many important semiconductor companies have their roots, including TSMC via ITRI.

    These companies in turn invest in product / highly applied research aimed at productizing the research for the mass market. Sometimes this is easy; sometimes it’s extremely difficult. Most of the challenges of EUV lithography occurred here, because going from low-yield academic research to high-yield commercial feasibility was extremely difficult. Direct investment here is almost always private, though there can be significant public investment routed through companies. If this work is published (it often isn’t), it’s commonly published as patents. Every company you’ve heard of has thousands of these patents, and some of the larger ones have tens or hundreds of thousands; all of that is the result of internal research.

    Lastly, they’ll take all of that, build standards (e.g. DDR5, h.265, 5G), and develop commercial implementations that actually do those things. That’s what OEMs buy (or try to develop on their own, in the case of Apple modems) to integrate into their products.
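    To make the “fundamental research to standard” arc concrete with one of the examples above: single-error-correcting codes like Hamming’s were pure 1950s math, and that lineage is what DDR5’s on-die ECC descends from. A toy Hamming(7,4) round trip in Python, purely illustrative (real DDR5 uses larger codes):

    ```python
    def hamming74_encode(d):
        # Encode 4 data bits into a 7-bit codeword with parity bits
        # at positions 1, 2, and 4 (1-indexed).
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p4 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p4, d2, d3, d4]

    def hamming74_decode(c):
        # The syndrome spells out the 1-indexed position of a single
        # flipped bit (0 means no error detected); fix it, return data.
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        err = s1 + 2 * s2 + 4 * s4
        if err:
            c = c[:]
            c[err - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[5] ^= 1                      # simulate a single-bit upset
    assert hamming74_decode(code) == word
    ```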


  • You have no idea how modern technology is produced. Any particular product is usually the result of dozens to thousands of iterations, some funded with public money and many not. Let’s take an example from your chart: DRAM. I actually don’t know when DARPA “developed” DRAM (since DARPA usually funds private companies to do development for them), but it must have been before 1970, when Intel designed the 1103 chip that got them started. Do you think that pre-1970s design is remotely similar to the DRAM operating in your device today? I’ll give you a hint: it’s not.
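    Just on density, the gap is easy to put a number on. Rough figures, not datasheet values:

    ```python
    # Back-of-the-envelope: Intel 1103 (1970) vs. one modern DDR5 die.
    i1103_bits = 1024              # the 1103 stored 1 Kbit per chip
    ddr5_die_bits = 16 * 2 ** 30   # 16 Gbit is a common DDR5 die density
    print(f"~{ddr5_die_bits // i1103_bits:,}x more bits per die")
    ```

    And density is just the easy part to quantify; the cell design, process, and interfaces have all been reinvented multiple times since.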

    And no, modern device development does not consist of gluing a bunch of APIs together. Apple maintains its own compilers, languages, toolchains, runtimes, hardware, operating systems, debugging tools, and so on. Some of that code had distant origins in open source (e.g. WebKit), but that’s vastly different from publicly funded, and those components are usually very different today.

    They’re failing to produce competitive modems because modern wireless is one of the closest things humans have to straight-up black magic. It’s extremely difficult to get right, especially as frequencies go up, SNR goes down, and we try to push things ever faster despite having effectively reached the Shannon limit ages ago.
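    The Shannon-Hartley bound is the whole story in one line: C = B · log2(1 + SNR). Once your coding is near the bound, the only levers left are more bandwidth (hence the push into mmWave) or better SNR. A quick sketch with assumed, illustrative numbers rather than any real link budget:

    ```python
    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        # Shannon-Hartley: C = B * log2(1 + SNR), with SNR in linear terms.
        snr = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr)

    # 20 MHz channel at a healthy 20 dB SNR vs. 400 MHz of mmWave spectrum:
    print(f"{shannon_capacity_bps(20e6, 20) / 1e6:.0f} Mbit/s")   # ~133
    print(f"{shannon_capacity_bps(400e6, 20) / 1e6:.0f} Mbit/s")  # ~2663
    ```

    Note the 20x throughput comes entirely from the 20x bandwidth, and holding that SNR at mmWave frequencies is exactly the hard part.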