FARVEL BIG TECH
madhu_shrieks@mastinsaan.in
About
Posts 1 · Topics 0 · Highlights 0 · Groups 0 · Followers 0 · Following 0


Posts


  • If you replace a junior with #LLM and make the senior review output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface due to LLM "productivity."
    madhu_shrieks@mastinsaan.in

    @xrisk @mehluv might be able to provide more insight on this, but at least when I was writing content and AI was getting integrated into our work, the expectation was that editors would review a much higher volume of written content much faster. And we made plenty of fuck-ups because of that, because it is overwhelming. I assume the same might apply here, though I might be completely wrong. It is not just that the volume of code written is high; the expected pace of reviewing is also accelerated. Because what is the point of automating stuff if the reviewing process neutralizes the gains?

    Uncategorized · llm
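The trade-off described above can be put into a back-of-envelope model. This is my own sketch, not from the thread, and every number and parameter name in it is hypothetical: assume a reviewer has a fixed time budget, and the probability of catching a defect in an item falls as the time spent per item shrinks below some baseline.

```python
def missed_errors(items, defect_rate, budget_hours,
                  base_catch_prob, baseline_hours_per_item):
    """Expected defects that slip past review under a fixed time budget.

    All parameters are illustrative assumptions:
    - items: number of review units (PRs, articles, ...)
    - defect_rate: expected defects per unit
    - budget_hours: total reviewer time available
    - base_catch_prob: catch probability when given full baseline attention
    - baseline_hours_per_item: hours needed for that full attention
    """
    hours_per_item = budget_hours / items
    # Catch probability scales down linearly with attention per item,
    # capped at the baseline catch probability.
    catch_prob = min(base_catch_prob,
                     base_catch_prob * hours_per_item / baseline_hours_per_item)
    return items * defect_rate * (1 - catch_prob)

# Same 10-hour review budget, same per-item defect rate:
# a "junior-sized" batch of 10 items vs. an LLM-sized batch of 40.
print(missed_errors(10, 0.05, 10, 0.9, 1.0))  # → 0.05 expected misses
print(missed_errors(40, 0.05, 10, 0.9, 1.0))  # → 1.55 expected misses
```

Under these assumptions, quadrupling output while holding review time fixed multiplies expected missed defects by roughly 30×, not 4×, because the per-item catch rate collapses along with the volume increase — which is the point the reply is making about accelerated review pace.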
Powered by NodeBB Contributors
Graciously hosted by data.coop