• NewOldGuard@lemmy.ml · 18 hours ago

    The training is a huge power sink, but so is inference (i.e. generating the images). Each image you output spins up a bunch of silicon sucking back hundreds of watts, on top of the impacts of training the model. A quick back-of-the-envelope for how that draw turns into energy per image is sketched below.
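
    A minimal sketch of the watts-to-watt-hours arithmetic. The `wh_per_image` helper and the 400 W / 75 s figures are illustrative assumptions, not measurements of any particular model or GPU:

    ```python
    # Back-of-the-envelope: watt-hours per image from average power draw
    # and generation time. The 400 W and 75 s values below are assumed
    # for illustration, not measured.

    def wh_per_image(avg_power_watts: float, seconds_per_image: float) -> float:
        """Energy per image in watt-hours: power (W) times time (h)."""
        return avg_power_watts * seconds_per_image / 3600.0

    # e.g. a single GPU pulling ~400 W for ~75 s per image:
    print(f"{wh_per_image(400, 75):.1f} Wh/image")  # -> 8.3 Wh/image
    ```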

      • NewOldGuard@lemmy.ml · 4 hours ago

        It depends on the model, but I’ve seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that’s not nothing, especially since most ML image-generation workflows involve batch-generating 9 or 10 images at a time. It’s several orders of magnitude less energy-intensive than training and fine-tuning, but it is not nothing by any means.
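
        Putting the batch numbers together (a minimal sketch; the per-image figures are the range quoted above, and the `batch_energy_wh` helper and batch size of 10 are assumptions based on that typical workflow):

        ```python
        # Total energy for one batch run, using the per-image range above.
        # A batch of 10 reflects a common image-generation workflow.

        def batch_energy_wh(wh_per_image: float, batch_size: int = 10) -> float:
            """Total watt-hours for a single batch of generated images."""
            return wh_per_image * batch_size

        for wh in (8.6, 10.0, 100.0):
            print(f"{wh:5.1f} Wh/image -> {batch_energy_wh(wh):6.0f} Wh per batch of 10")
        #   8.6 Wh/image ->     86 Wh per batch
        #  10.0 Wh/image ->    100 Wh per batch
        # 100.0 Wh/image ->   1000 Wh (1 kWh) per batch
        ```

        At the top of that range, a single batch works out to about a kilowatt-hour.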