FARVEL BIG TECH

the precise timeline of how OpenAI fucked over the RAM market

22 Posts · 18 Posters

This thread has been deleted. Only users with topic management privileges can see it.
  • davidgerard@circumstances.run

    the precise timeline of how OpenAI fucked over the RAM market

    > October 2025: Sam Altman flies to Seoul and signs simultaneous deals with Samsung and SK Hynix for 900,000 DRAM wafers per month. That's 40% of global supply. Neither company knew the other was signing a near-identical commitment at the same time.

    https://xcancel.com/aakashgupta/status/2038813799856374135

    kjhank@social.vivaldi.net
    wrote, last edited
    #3

    @davidgerard There was this piece about it last year: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram-deal

    • ariadne@social.treehouse.systems

      @davidgerard thanks sam!

      billsaysthis@curmudgeon.cafe
      wrote, last edited
      #4

      @ariadne @davidgerard “Google publishes TurboQuant, a compression algorithm that reduces AI memory requirements by 6x with zero accuracy loss.”

      This algorithm is somehow only applicable to AI??

      • ariadne@social.treehouse.systems
        wrote, last edited
        #5

        @BillSaysThis @davidgerard yes, it is possible to create domain-specific compression algorithms that are better than general ones.

        • evoscale@c.im
          wrote, last edited
          #6

          @davidgerard Since he's obviously alt Man, is he pro Caveman? Cuz we be headin' back there with his ilk in power.

          • vorsos@beige.party
            wrote, last edited
            #7

            @ariadne @BillSaysThis @davidgerard Really? I’ve been using pngcrush for audio files.

            • demofox@mastodon.gamedev.place
              wrote, last edited
              #8

              @BillSaysThis @ariadne @davidgerard if so, it's because they were doing something stupid and this fixes that IMO.

              • davidgerard@circumstances.run
                wrote, last edited
                #9

                @demofox @BillSaysThis @ariadne yeah I'd be slightly interested in the details, but also only slightly, because (a) if it were applicable anywhere else we'd all know about it, and (b) we're far enough up and along the S curve that I can see 6x the memory giving only a slight improvement. Maybe plain ML can benefit a lot, I dunno.

                • phl@mastodon.social
                  wrote, last edited
                  #10

                  @davidgerard Fuck, and I say this without any reservation whatsoever, Sam Altman.

                  • gnuplusknoppers@troet.cafe
                    wrote, last edited
                    #11

                    @Vorsos @ariadne @BillSaysThis @davidgerard so?

                    • fritzadalis@infosec.exchange
                      wrote, last edited
                      #12

                      @davidgerard @ariadne
                      So glad memory is cheap again now. Wait, what?

                      • jnkrtech@social.treehouse.systems
                        wrote, last edited
                        #13

                        @davidgerard @demofox @BillSaysThis @ariadne my question is just whether this will make RAM less expensive. I’m guessing “no”, because that would be a good thing, and it seems increasingly likely that we can’t have those.

                        • ariadne@social.treehouse.systems
                          wrote, last edited
                          #14

                          @davidgerard @demofox @BillSaysThis @jnkrtech it did not

                          • jnkrtech@social.treehouse.systems
                            wrote, last edited
                            #15

                            @ariadne @davidgerard @demofox @BillSaysThis 🥲

                            • rogerb@mastodon.scot
                              wrote, last edited
                              #16

                              @davidgerard
                              Assuming he got a fixed price as part of the deal...
                              he can now sell them on and make a tidy profit, hence boosting OpenAI's numbers for the next investment round and/or going public.

                              • abucci@buc.ci
                                wrote, last edited
                                #17
                                @davidgerard@circumstances.run @demofox@mastodon.gamedev.place @BillSaysThis@curmudgeon.cafe @ariadne@treehouse.systems A couple points, bearing in mind that this is the first time I'm encountering TurboQuant and might be misspeaking:

                                • This is perhaps neither here nor there, but the X account making the originally-quoted post is https://www.aibyaakash.com , "AI by Aakash" (this is linked later in the same thread). The person seems fully AI-pilled and has several AI-themed substacks
                                • TurboQuant, or at least the QJL bit, sounds suspiciously like Locality-Sensitive Hashing. That's a well-known technique, and it can definitely do impressive things. When I tried my hand at startups I made heavy use of it (see https://bucci.onl/notes/Legit-tech ). In my use case I could get something like a 1,000-fold compression with acceptable accuracy loss. Basically LSH can be used to turn a long vector of floats into a comparatively short bitstring without losing too much of the geometrical information in the float vectors. Even one bit packs a ton of information
                                • The general problem of vector search that this method aims to address is an old one, and rotating or compressing the vectors is nothing new. In old school linear algebra things like diagonalization or SVD do this, for instance. I don't know if that's what they're doing but it's a general class of technique and a straightforward thing to try
                                • Vector quantization is, of course, also quite old. You experience it every time you listen to an MP3.
                                So, it's possible this is a characteristic Google move of taking existing science, ramming it through their engineering machine, and suggesting novelty with a clever title, headline, and/or new name. Which is not to suggest it's a bad piece of engineering. I couldn't say. However, it's possible this is a Google rebrand, and the questions raised in this thread, like "wouldn't we already know about this? wouldn't it be applied outside of AI?" are answered by: yes, we did already know about this and yes, it has already been applied outside of AI. Oh and yes, it'd be quite silly if nobody thought to try these old school techniques in the latest incarnation of LLM-based AI before 2026.
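(The random-hyperplane signature idea abucci describes can be sketched in a few lines. This is an illustrative toy of locality-sensitive hashing, not TurboQuant's actual method; all names and parameters here are made up for the example.)

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signatures(vectors, n_bits=64, planes=None):
    """Random-hyperplane LSH: map float vectors to short bit signatures.

    Vectors pointing in similar directions (high cosine similarity) land
    on the same side of most random hyperplanes, so their signatures
    agree in most bit positions.
    """
    if planes is None:
        planes = rng.standard_normal((n_bits, vectors.shape[1]))
    # Sign of the dot product with each hyperplane normal -> one bit each.
    return (vectors @ planes.T > 0), planes

# Two nearby 256-float vectors and one unrelated one.
a = rng.standard_normal(256)
b = a + 0.1 * rng.standard_normal(256)   # small perturbation of a
c = rng.standard_normal(256)             # independent

sigs, planes = lsh_signatures(np.stack([a, b, c]), n_bits=64)

hamming_ab = int((sigs[0] != sigs[1]).sum())
hamming_ac = int((sigs[0] != sigs[2]).sum())
# The perturbed pair disagrees in far fewer bits than the unrelated
# pair, even though 64 bits replace 256 floats per vector.
print(hamming_ab, hamming_ac)
```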
                                • reflex@retrogaming.social
                                  wrote, last edited
                                  #18

                                  @davidgerard @demofox @BillSaysThis @ariadne UFD Tech discussed it the other day and it only applies to a very specific aspect of AI resulting in a tiny overall shrink in memory consumption that's being used to load slightly larger models. And it started being used middle of last year, meaning it's already baked in.

                                  • qgustavor@urusai.social
                                    wrote, last edited
                                    #19

                                    @Vorsos @ariadne @BillSaysThis @davidgerard Reminds me of when I took a bunch of manga PNGs, converted them to BMP, and compressed everything with 7z: the resulting file was smaller than compressing the original PNGs with 7z.
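(That trick works because PNG compresses each image independently, so an archiver only sees opaque, near-random streams; raw BMPs in one solid archive let the compressor exploit redundancy across pages. A rough simulation of the effect with synthetic data, zlib standing in for PNG's deflate and lzma for 7z:)

```python
import random
import zlib
import lzma

random.seed(42)
# Two nearly identical "manga pages" as raw, repetitive byte streams.
words = [b"screentone ", b"lineart ", b"halftone ", b"panel ", b"ink "]
page1 = b"".join(random.choice(words) for _ in range(8000))
page2 = bytearray(page1)
for i in range(0, len(page2), 1024):    # a few scattered edits
    page2[i] ^= 0x20
page2 = bytes(page2)

# PNG-style: each page deflated on its own, then the archiver compresses
# the two opaque compressed streams together.
png_archive = len(lzma.compress(zlib.compress(page1, 9) + zlib.compress(page2, 9)))

# BMP-style: raw pages in one solid archive, so the compressor can encode
# page2 as "page1 with a few changes".
bmp_archive = len(lzma.compress(page1 + page2))

print(bmp_archive < png_archive)
```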

                                    • cppguy@infosec.space
                                      wrote, last edited
                                      #20

                                      @Vorsos

                                      I can't tell if you're serious, but Ariadne is right. Simple example: FLAC will losslessly compress audio better than zip or gzip will. That's why it was invented. 😄

                                      @ariadne @BillSaysThis @davidgerard
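(The gap comes from FLAC's predictive modelling, which general-purpose compressors lack. A toy sketch of the idea on a synthetic signal, with zlib standing in for the entropy coder; this is illustrative only, not FLAC's real codec:)

```python
import math
import zlib

# A smooth 16-bit mono "audio" signal: a slowly varying sine wave.
samples = [int(2000 * math.sin(i / 50)) for i in range(20000)]
pcm = b"".join(s.to_bytes(2, "little", signed=True) for s in samples)

# General-purpose: deflate straight over the raw PCM bytes.
general = len(zlib.compress(pcm, 9))

# Domain-specific (FLAC's core idea): predict each sample from the
# previous one and store only the small residuals, then entropy-code those.
residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
res_bytes = b"".join(r.to_bytes(2, "little", signed=True) for r in residuals)
predictive = len(zlib.compress(res_bytes, 9))

print(predictive < general)
```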

                                      • katlin@mastodon.social
                                        wrote, last edited
                                        #21

                                        @CppGuy @Vorsos @ariadne @BillSaysThis @davidgerard

                                        Interestingly enough, Chinchilla 70B was trained mostly on text and beat domain-specific compressors PNG and FLAC in one experiment.

                                        https://arxiv.org/abs/2309.10668

                                        Not saying you are wrong. I assume that newer, domain-specific algorithms would still outperform the general Chinchilla algorithm, and there can be practical downsides if they involve large memory requirements, even if they result in more efficient compression.

                                        • djgummikuh@mastodon.social
                                          wrote, last edited
                                          #22

                                          @davidgerard I wonder, though: if the demand side is collapsing this quickly, why isn't the price following? "Analysts expect elevated prices until 2028". Are they lying? Trying to protect their investment? Or is there more at play than Altman's eccentricity?

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop