The Stanford Internet Observatory has made a distressing discovery: more than 1,000 suspected child sexual abuse images in LAION-5B, a dataset used to train AI image generators. The finding, made public in December 2023, raises serious concerns about the sources and methods used to compile AI training material. LAION-5B is associated with London-based Stability AI's Stable Diffusion image generator.