just like how lumberjacks get worse at using an axe after leaning on chainsaws.
Edit:
just to be a little less facetious, i’ll note that this is not related to the current ai hype at all. the medical field has been using machine learning for well over two decades at this point, and generally in the form of classifiers rather than generators. you feed it a bunch of x-rays and whether they show, say, lung cancer, and the system will automatically sort out things that look normal from things that don’t. this is a good thing because it means doctors can spend more time with patients. doctors also got worse at manually diagnosing broken bones when x-ray machines became common.

Edit 2:
a classifier basically just cooks an image down to some basic characteristics, places it as a point on a graph, and checks whether it lands above or below a line it has used other images to refine. picture a simple 2D scatter plot: say blue dots are images that don’t show lung cancer, and red dots are images that do. where they end up on the graph is based on some number of factors that are determined by a medical professional. it doesn’t have to be 2D, it can be any number of dimensions. then, using one or more classification methods, the machine learning algorithm figures out where to draw the line between blue and red. after that, when you feed in a new image, it can tell you whether it’s definitely in the blue area, and therefore normal, or maybe in the red area, and therefore worth a closer look by a doctor.
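for anyone who wants to see that idea in code, here’s a minimal sketch. it’s not the software from any actual study; the two features, the example values, and the choice of scikit-learn’s LogisticRegression are all just assumptions to illustrate the “learn where the line goes, then check which side a new point lands on” idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one x-ray already reduced to two hand-picked features
# (the "factors determined by a medical professional"). Values are made up.
# Labels: 0 = looks normal ("blue dot"), 1 = suspicious ("red dot").
features = np.array([
    [0.2, 0.1], [0.3, 0.2], [0.1, 0.4],   # normal-looking examples
    [0.8, 0.9], [0.7, 0.8], [0.9, 0.6],   # suspicious examples
])
labels = np.array([0, 0, 0, 1, 1, 1])

# "Drawing the line": fit a linear decision boundary between the two groups.
model = LogisticRegression().fit(features, labels)

# A new image, reduced to the same two features.
new_image = np.array([[0.75, 0.7]])
print(model.predict(new_image))        # [1] -> worth a closer look by a doctor
print(model.predict_proba(new_image))  # how confident the classifier is
```

real systems use far more than two features (any number of dimensions, as noted above), but the line-drawing idea is the same.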
Summary of my comment: the study showed that the AI tool in question was effective for the task, nothing more.
I didn’t read this particular article, but I recently read a different one about the same study. I also clicked through to the study itself, which was paywalled, but I read the abstract and everything else that was freely available. As far as I could tell:
- While using the AI tool in question, performance showed an immediate and sustained increase of 24% relative to baseline
- Immediately after the tool was taken away (after using it for three months), performance was 20% lower than the baseline
- The study did not check what level performance returned to after three months without the tool, nor when (or whether) it returned to baseline
- The study also did not compare performance drops after returning from a three month vacation
- The study did not compare performance drops when losing access to other tools
This outcome is expected when you’re given a tool that simplifies a process and then lose access to it. If I were writing code in Notepad and using _v2, _v3, etc. for versioning, was then given an IDE and git for three months, and then had to go back to my old ways with Notepad, I’d expect to be less effective than I had been. I’d have been relying on syntax highlighting, so I’d be paying less attention to the plain monochrome text than I used to. I’d have fallen out of practice with the version-naming techniques I used to use. All of the stuff I did to make up for having worse tooling, I’d be out of practice with.
But that doesn’t mean that I should use worse tools.