I agree with the overall message of the article but I’ve never met anyone who said that math is by default unbiased. Is that misconception common?
Yes. For example, when an AI-based system is hyped or deployed, very often it is touted as an improvement over whatever the previous (human-involving) system was, because supposedly it will “make objective decisions”, “without prejudices”, “based on cold data”, and so on.
Each instance of this needs to be called out, every time, because these decisions end up being anything but.
Oh yeah that’s a good point! Thank you
Generally people believe that math, numbers, and data cannot have implicit bias, yeah. But algorithms? I think it’s common knowledge that algorithms are biased and bad across any semi-knowledgeable community.
That is sadly very much not the case. As in, there are loads of people, including in “semi-knowledgeable communities”, who do not understand that algorithms are designed by people, for a specific purpose, and thus replicate their biases — even if their designers try not to make that happen.
I think people wouldn’t say that if you asked them, but it’s for sure the case that people will be more likely to believe something is scientific and true if you:
show a list of numbers and then claim the data shows whatever you’re claiming, regardless of whether it actually shows that or whether you’re making a lot of assumptions
show a chart, even if the axes aren’t clearly labelled, or are scaled in a way that makes it look more impressive
can point at one study and say it detected a significant difference, even though statistical significance (if there were no real effect, results this extreme would only show up by chance less than 5% of the time) is not the same thing as the everyday meaning of significance (this effect is large or important), and even though a single study on its own may well be biased or fail to replicate. There’s a rough sketch of both problems after this list.
show your math, even if it’s bad math or doesn’t represent what you say it does, because most people will assume that someone who shows their math knows what they’re doing.
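To make that significance point concrete, here’s a rough sketch (assuming Python with numpy and scipy, neither of which comes from this thread): a tiny effect becomes “statistically significant” once the sample is large enough, and pure noise still clears the p < 0.05 bar about one time in twenty.

```python
# Rough sketch, not from the thread: two ways "p < 0.05" can mislead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) A tiny, practically meaningless effect is "significant" given enough data.
a = rng.normal(loc=100.0, scale=15.0, size=200_000)  # control group
b = rng.normal(loc=100.3, scale=15.0, size=200_000)  # means differ by ~0.3%
res = stats.ttest_ind(a, b)
print(f"tiny effect: p = {res.pvalue:.2e}")  # far below 0.05, effect still tiny

# 2) With no real effect at all, roughly 1 in 20 "studies" still hits p < 0.05.
n_studies = 1000
false_positives = sum(
    stats.ttest_ind(rng.normal(size=50), rng.normal(size=50)).pvalue < 0.05
    for _ in range(n_studies)
)
print(f"noise-only studies flagged as significant: {false_positives}/{n_studies}")
```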
Anyway I think this sort of thing is why companies will claim something like “9 out of 10 doctors recommend x” or “people who used our product had a 300% greater chance of getting into college” or “our breakfast cereal is up to 70% healthier than competing brands” or whatever, instead of like “most doctors recommend” or “people who use our product are much more likely to get into college” or “our cereal is much healthier than competing brands”.
Even though the former claims are typically based on absolute nonsense and misleading interpretations of company-funded studies, aren’t actually any more specific or reliable than the latter, and sometimes don’t even make sense (70% healthier? how do you measure health as a percentage?), they look more specific and scientific/authoritative to people just by having numbers in them.
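And to be fair, even a number that isn’t outright fabricated can do this kind of work. A quick sketch with completely made-up figures (nothing here is from a real study): a “300% greater chance” can just be a jump from a 0.1% chance to a 0.4% chance.

```python
# Made-up numbers, not from any real study: how a "300% greater chance"
# headline can come from a change that barely matters in absolute terms.
base_rate = 0.001  # 0.1% of non-users get into college in this hypothetical
user_rate = 0.004  # 0.4% of product users do

relative_increase = (user_rate - base_rate) / base_rate  # 3.0 -> "300% greater chance"
absolute_increase = user_rate - base_rate                # 0.003 -> 0.3 percentage points

print(f"relative: {relative_increase:.0%} greater chance")       # the headline
print(f"absolute: {absolute_increase:.1%} more likely overall")  # the fine print
```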