• 5 Posts
  • 47 Comments
Joined 2 years ago
Cake day: July 19th, 2023




  • The generic abyss of artificial intelligence | John R. Gallagher

    All this business talk from CEOs about AI automating work comes down to them not valuing the input of workers. You can hear the jubilant rhetoric about robots because robots represent firing all the workers. CEOs see their workers as interchangeable laborers who fit inside templates. They want workers who pull the levers of templates. They’ve always wanted this, since the industrial revolution. But now the templates are no longer physical commodities but instead our stories, our genres.

    Call it template capitalism. Social media companies already operate under this logic through the templates they force on users, just as car companies forced drivers into templates and shoe companies did with standard sizes. There’s nothing stopping the knowledge sectors of the economy from extending that logic to workers. Knowledge workers are being deskilled by being made to obey the generic templates of LLMs.

    Template capitalism hollows out the judgment of individual knowledge workers by replacing slowly accreted genre experience with the summed average of all genres. Under this system, knowledge workers merely ensure the machines don’t make errors (or what the AI companies have relabeled as “hallucinations”). The nuance of situated knowledge evaporates, leaving behind procedural obedience. The erosion of individual judgment is the point. Workers who diverge from the ordained path of LLMs are expendable. If you challenge the templates, you get fired.

    They’ve always wanted this, indeed. There’s some comfort for me in the reminder that this year’s layoffs are no different from those of the last cycle, except maybe the excuses are thinner.


  • Apologies for doing journal club instead of sneer club.

    Voiseux, G., Tao Zhou, R., & Huang, H.-C. (Brad). (2025). Accepting the unacceptable in the AI era: When & how AI recommendations drive unethical decisions in organizations. Behavioral Science & Policy, 0(0). https://doi.org/10.1177/23794607251384574

    abstract:

    In today’s workplaces, the promise of AI recommendations must be balanced against possible risks. We conducted an experiment to better understand when and how ethical concerns could arise. In total, 379 managers made either one or multiple organizational decisions with input from a human or AI source. We found that, when making multiple, simultaneous decisions, managers who received AI recommendations were more likely to exhibit lowered moral awareness, meaning reduced recognition of a situation’s moral or ethical implications, compared with those receiving human guidance. This tendency did not occur when making a single decision. In supplemental experiments, we found that receiving AI recommendations on multiple decisions increased the likelihood of making a less ethical choice. These findings highlight the importance of developing organizational policies that mitigate ethical risks posed by using AI in decision-making. Such policies could, for example, nudge employees toward recalling ethical guidelines or reduce the volume of decisions that are made simultaneously.

    so is the moral decline a side effect, or is it technocapitalism working as designed?