Why do I still have to work my boring job while AI gets to create art and look at boobs?
Because life is suffering and machines dream of electric sheep.
I’ve seen things you people wouldn’t believe.
I dream of boobs.
Now make mammograms not cost $500, not have a 6-month waiting time, and make them available to women under 40. Then this’ll be a useful breakthrough
It’s already this way in most of the world.
Oh for sure. I only meant in the US where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries
For reference here in Australia my wife has been asking to get mammograms for years now (in her 30s) and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US.
Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.
I think it’s free in most of Europe, or relatively cheap
Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.
Another big thing to note: we recently had a different but VERY similar headline about an AI that found typhoid early and could point it out more accurately than doctors could.
But when they examined the AI to see what it was doing, it turned out it was weighing the specs of the machine used to do the scan… An older machine means the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.
That’s actually really smart. But that info wasn’t given to doctors examining the scan, so it’s not a fair comparison. It’s a valid diagnostic technique to focus on the particular problems in the local area.
“When you hear hoofbeats, think horses not zebras” (outside of Africa)
AI is weird. It may not have been given the information explicitly. Instead it could be an artifact in the scan itself due to the different equipment. For example, if one scan was lower resolution than the others and you resized all of the scans to match the lowest one, the AI might be picking up on the resizing artifacts, which are not present in the originally low-resolution scan.
I’m saying that info is readily available to doctors in real life. They are literally in the hospital and know what the socioeconomic background of the patient is. In real life they would be able to guess the same.
The manufacturing date of the scanner was actually saved as embedded metadata in the scan files themselves. None of the researchers considered that until after the experiment, when they found it was THE thing the model was looking at.
That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.
Keep the human in the loop!
Not at all, in this case.
A false positive of even 50% can mean telling the patient “they are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.
Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate on a 5-year prediction would still only be telling something like 15% of women to be screened more often.
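As a rough back-of-envelope check (a sketch using the comment’s assumed 2% annual risk, which is the commenter’s figure and not a clinical statistic), the arithmetic lands in the same ballpark as the ~15% claim:

```python
# Five-year risk from an assumed 2% annual risk (the comment's figure).
annual_risk = 0.02
five_year_risk = 1 - (1 - annual_risk) ** 5
print(f"5-year risk: {five_year_risk:.1%}")  # 9.6%

# If half of all positive calls are false (one reading of "50% false
# positive"), the model flags about twice the true-incidence fraction.
flagged_fraction = five_year_risk / 0.5
print(f"flagged for extra screening: {flagged_fraction:.1%}")  # 19.2%
```

That is in the high teens rather than exactly 15%, but the point stands: only a modest slice of women would be told to screen more often.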
Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.
That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.
Nearly 4 out of 5 that progress to biopsy are benign. Nearly 4 times that are called for additional evaluation. The false positives are quite high compared to other imaging. It is designed that way, to decrease the chances of a false negative.
The false negative rate is also quite high. It will miss about 1 in 5 women with cancer. The reality is mammography is just not all that powerful as a screening tool. That’s why the criteria for who gets screened and how often has been tailored to try and ensure the benefits outweigh the risks. Although it is an ongoing debate in the medical community to determine just exactly what those criteria should be.
How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? Then false negatives are the ones that worry me the most.
It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.
Let’s say you have 10,000 patients. 10 have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5% that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis in those 10 patients who have cancer.
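The base-rate arithmetic in that hypothetical works out like this (a sketch using the comment’s made-up numbers, not real screening statistics):

```python
# Hypothetical screening population from the comment above.
n = 10_000
true_cases = 10
sensitivity = 1.0   # "identifies all 10"
fpr = 0.05          # 5% false positive rate

tp = true_cases * sensitivity
fp = (n - true_cases) * fpr
ppv = tp / (tp + fp)  # positive predictive value: P(cancer | flagged)
print(f"false positives: {fp:.0f}")  # ~500 healthy patients flagged
print(f"PPV: {ppv:.1%}")             # 2.0% -- 98 of every 100 positives are wrong
```

This is the classic base-rate problem: with a rare condition, even a small false positive rate swamps the true positives.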
Well it’d certainly benefit the medical industry. They’d be saddling tons of patients with surgeries, chemotherapy, mastectomy, and other treatments, “because doctor-GPT said so.”
But imagine being a patient getting physically and emotionally altered, plunged into irrecoverable debt, distressing your family, and it all being a whoopsy by some black-box software.
That’s a good point that it could burden the system, but why would you ever put someone on chemotherapy based on the model described in the paper? It seems more likely to burden the system by increasing the number of patients doing more frequent screening. Someone has to pay for all those doctor-patient meeting hours, for sure. But the benefit outweighs this cost (which in my opinion is good and cheap, since it prevents treatment at later stages, which is expensive).
Biopsies are small but still invasive. There’s risk of infection or reactions to anesthesia in any surgery. If 100 million women get this test, a 5% false positive rate will mean 5 million unnecessary interventions. Not to mention the stress of being told you have cancer.
5 million unnecessary interventions means a small percentage of those people (thousands) will die or be harmed by the treatment. That’s the harm that it causes.
You have a really good point too! Maybe just an indication of higher risk, along the lines of “Hey, screening more often couldn’t hurt,” might actually be a net positive, and it wouldn’t warrant such extreme measures unless the cancer was positively identified by, hopefully, human professionals.
You’re right though, there always seems to be more demand than supply for anything medicine related. Not to mention, here in the U.S for example, needless extra screenings could also heavily impact a lot of people.
There’s a lot to be considered here.
deleted by creator
The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.
Good luck reverse-engineering millions if not billions of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.
Under no circumstance should we accept a “black box” explanation.
Go learn at least the basic principles of neural networks, because this sentence of yours alone makes me want to slap you.
Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s
Hey look, this took me like 5 minutes to find.
Censius guide to AI interpretability tools
Here’s a good thing to wonder: if you don’t know how you’re black box model works, how do you know it isn’t racist?
Here’s what looks like a university paper on interpretability tools:
As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.
Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.
Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.
interpretability costs money though :v
iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.
Well, in theory you can explain how the model comes to its conclusion. However, I guess that only 0.1% of the “AI engineers” are actually capable of that. And those probably cost $100k per month.
Link?
This one’s from 2019: Link
I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.
It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it will take way too long to explain all the underlying concepts in order to even start to explain how it works.
I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.
I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.
The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.
For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset. All we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.
In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.
An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
Thank you for giving some insights into ML, which is now often just branded “AI”. Just one note, though: there are many ML algorithms that do not employ neural networks, and they don’t have billions of parameters. Especially in binary image classification (looks like cancer or not), methods like support vector machines achieve great results, and they have very few parameters.
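To make the parameter-count point concrete, here is a minimal sketch of a linear classifier, which is the core of a linear SVM: the entire model is one weight per feature plus a bias. The weights and feature values below are made up for illustration, not trained on anything:

```python
def linear_classify(x, w, b):
    """Linear decision rule: sign of the dot product w.x plus bias b."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0  # 1 = "looks like cancer", 0 = "does not"

# Three hand-crafted image features -> three weights and one bias:
# a 4-parameter model, versus billions in a large neural network.
w = [0.8, -0.3, 1.1]
b = -0.5
print(linear_classify([1.0, 0.2, 0.6], w, b))  # -> 1
print(linear_classify([0.1, 0.9, 0.1], w, b))  # -> 0
```

A real SVM learns `w` and `b` from data (and kernel SVMs are a bit more involved), but the interpretability upside is visible even in this toy: each weight says how much each feature pushes the decision.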
Machine learning is a subset of artificial intelligence, which is a field of research as old as computer science itself.
The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field’s long-term goals.[16]
y = w^T x
hope this helps!
our brain is a black box, we accept that. (and control the outcomes with procedures, checklists, etc)
It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.
What a vague and unprovable thing you’ve stated there.
If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.
Alternatively, if it has a lower false positive rate than humans, have doctors check the negative results only: if the AI sees something, it’s DEFINITELY worth a biopsy, and a human doctor double-checks the negative readings just to make sure nothing worth looking into goes unnoticed.
Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.
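The routing idea above can be sketched as a tiny triage function (a minimal sketch; the model, scores, and threshold are all hypothetical stand-ins, and a real deployment would be tuned and validated clinically):

```python
def triage(scans, model, threshold=0.5):
    """Route scans: AI-positive scans go to full radiologist review,
    AI-negative scans get only a quick human spot-check."""
    for scan in scans:
        score = model(scan)  # assumed suspicion score in [0, 1]
        if score >= threshold:
            yield scan, "full radiologist review"  # AI flagged something
        else:
            yield scan, "spot-check only"          # AI saw nothing fishy

# Usage with a stand-in "model" that just reads a precomputed score:
fake_model = lambda s: s["suspicion"]
scans = [{"id": 1, "suspicion": 0.9}, {"id": 2, "suspicion": 0.1}]
for scan, route in triage(scans, fake_model):
    print(scan["id"], "->", route)
```

The point of the design is that radiologist time concentrates on the cases where it matters most, while every scan still gets at least some human attention.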
An image recognition model like this is usually tuned specifically to have a very low false negative rate (often well below human) in exchange for a high false positive rate (overly cautious about cancer)!
This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors, and tentatively mark clear scans. A second look to be a safety net for tired, overworked, or outdated eyes.
Nice comment. I like the detail.
For me, the main takeaway doesn’t have anything to do with the details though, it’s about the true usefulness of AI. The details of the implementation aren’t important, the general use case is the main point.
You in QA?
HAHAHAHA thank fuck I am not
Ok, I’ll concede. Finally a good use for AI. Fuck cancer.
It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.
The hypesters and grifters do not prevent AI from being used for truly valuable things even now. In fact medical uses will be one of those things that WILL keep AI from just fading away.
Just look at those marketing wankers as a cherry on the top that you didn’t want or need.
People just need to understand that the true medical uses are as tools for physicians, not “replacements” for physicians.
I think the vast majority of people understand that already. They don’t understand just what all those gadgets are for anyway. Medicine is largely a “black box” or magical process to them anyway.
There are way too many techbros trying to push the idea of turning chat gpt into a physician replacement. After it “passed” the board exams, they immediately started hollering about how physicians are outdated and too expensive and we can just replace them with AI. What that ignores is the fact that the board exam is multiple choice and a massive portion of medical student evaluation is on the “art” side of medicine that involves taking the history and performing the physical exam that the question stem provides for the multiple choice questions.
And it has gone exactly nowhere either hasn’t it. Nor do those techbros want the legal and moral responsibilities that come with an actual licence to pass the boards.
I think there are some techbros out there with sleazy legal counsel that promises they can drench the thing in enough terms and conditions to relieve themselves of liability, similar to the way that WebMD does. Also, with healthcare access the way it is in America, there are plenty of people who will skim right past the disclaimer telling them to go see a real healthcare provider and just trust the “AI”. Additionally, there’s enough slimy NP professional groups pushing for unsupervised practice that they could just sign on their NP licenses for prescriptions, and the malpractice laws currently in place would be difficult to enforce depending on outcomes and jurisdictions.
This doesn’t get into the sowing of discord and discontent with physicians that is happening even without these products existing in the first place. Even the claims that an AI could potentially, maybe, someday sorta-kinda replace physicians makes people distrust and dislike physicians now.
Separately, I have some gullible classmates in medical school that I worry about quite a lot, because they’ve bought into the line that ChatGPT passed the boards, so they take its hallucinations as gospel and argue with our professor’s explanations of why the hallucination is wrong and the correct answer on a test is correct. I was not shy about admonishing them and forcefully explaining how these “generative AIs” are little more than glorified text predictors, but the lure of easy answers, without having to dig for them and understand complex underlying principles, is very alluring, so I don’t know if I actually got through to them or not.
The hypesters and grifters do not prevent AI from being used for truly valuable things even now.
I mean, yeah, except that the unnecessary applications are all the corporations are paying anyone to do these days. When the hype flies around like this, the C-suite starts trying to micromanage the product team’s roadmap. Once it dies down, they let us get back to work.
Also, for GPU prices to come down. Right now the AI garbage is eating a lot of the GPU production, as well as wasting a ton of energy. It sucks. Right as the crypto stuff started dying out we got AI crap.
A cure for cancer, if it can be literally nipped in the bud, seems like a possible money-maker to me.
It’s a money saver, so its profit model is all wonky.
A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
A hospital, as a place that helps people, will still want to use these scans widely because “ignoring preventative care to profit off long-term treatment” is a bit too “mask off” even for the US healthcare system, and doctors would quit. Insurance companies, however, would pay just shy of the cost of treatment to avoid paying for treatment.
So the cost will rise to the cost of treatment times the incidence rate, scaled by the likelihood the scan catches something, plus system and staff costs. In a sane system, we’d pass a law saying capable facilities must provide preventative screenings at cost where there’s a reasonable chance the scan would provide meaningful information, and have the government pay the bill. Everyone’s happy except people who view healthcare as an investment opportunity.
A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
I believe this idea was generally debunked a little while ago; to wit, the profit margin on cancer care just isn’t as big (you have to pay a lot of doctors) as the profit margin on mammograms. Moreover, you’re less likely to actually get paid the later you identify it (because end-of-life care costs for the deceased tend to get settled rather than being paid).
I’ll come back and drop the article link here, if I can find it.
Oh interesting, I’d be happy to be wrong on that. :)
I figured they’d factor the staffing costs into what they charge the insurance, so it’d be more profit due to a higher fixed costs, longer treatment and some fixed percentage profit margin.
The estate costs thing is unfortunately an avenue I hadn’t considered. :/ I still think it would be better if we removed the profit incentive entirely, but I’m pleased if the two interests are aligned if we have to have both.
Oh, absolutely. Absent a profit motive that pushes them toward what basically amounts to a protection scam, they’re left with good old fashioned price gouging. Even if interests are aligned, it’s still way more expensive than it should be. So yes, I agree that we should remove the profit incentive for healthcare.
Sadly, I can’t find the article. I’ll keep an eye out for it, though. I’m pretty sure I linked to it somewhere but I’m too terminally online to figure out where.
That’s not what this is, though. This is early detection, which is awesome and super helpful, but way less game-changing than an actual cure.
It’s not a cure in itself, but isn’t early detection a good way to catch it early and in many cases kill it before it spreads?
It sure is. But this is basically just making something that already exists more reliable, not creating something new. Still important, but not as earth-shaking.
Honestly they should go back to calling useful applications ML (that is what it is) since AI is getting such a bad rap.
Machine learning is a type of AI. Sci-fi movies just misused the term and now the startups are riding the hype train. AGI =/= AI. There’s lots of stuff to complain about with AI these days, like Stable Diffusion image generation and LLMs, but the fact that they are AI is simply true.
I mean, it’s an entirely arbitrary distinction. AI, for a very long time before ChatGPT, meant something like AGI. We didn’t call classification models “intelligent” because they didn’t have any human-like characteristics. It’s as silly as saying a regression model is AI. They aren’t intelligent things.
I once had ideas about building a machine learning program to assist workflows in Emergency Departments, and its training data would be entirely generated by the specific ER it’s deployed in. Because of differences in populations, the data is not always readily transferable between departments.
And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.
Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.
Our clinics are already using AI to clean up MRI images for easier and higher quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.
It never makes mistakes that affect diagnosis?
It’s not diagnosing, which is good imho. It’s just being used to remove noise and artifacts from the images on the scan. This means the MRI is clearer for the reading physician and ordering surgeon in the case of the MRI and that the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.
I’m still wary of using it to diagnose in basically any scenario because of the salience and danger that both false negatives and false positives threaten.
I’m involved in multiple projects where stuff like this will be used in very accessible manners, hopefully in 2-3 years, so don’t get too pessimistic.
I can do that too, but my rate of success is very low
pretty sure iterate is the wrong word choice there
They probably meant reiterate
I think it’s a joke, like to imply they want to not just reiterate, but rerererereiterate this information, both because it’s good news and also in light of all the sucky ways AI is being used instead. Like at first they typed “I just want to reiterate…” but decided that wasn’t nearly enough.
Common case of programmer brain
That’s not the only issue with the English-esque writing.
100% true, just the first thing that stuck out at me
I suppose they just dropped the “re” off of “reiterate” since they’re saying it for the first time.
Dude needs to use AI to fix his fucking grammar.
This is a great use of tech. With that said I find that the lines are blurred between “AI” and Machine Learning.
Real Question: Other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying "Here’s a picture of Billy (maybe) " it’s saying, “Here’s a picture of some precancerous masses (maybe)”.
That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.
I’ve been looking at the paper, some things about it:
- the paper and article are from 2021
- the model needs to be able to use optional data from age, family history, etc, but not be reliant on it
- it needs to combine information from multiple views
- it predicts risk for each year in the next 5 years
- it has to produce consistent results with different sensors and diverse patients
- it’s not the first model to do this, and it is more accurate than previous methods
Good stuff
It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”, it sounds a lot more sexy and futuristic.
Besides LLMs, large language models, we also have GANs, Generative Adversarial Networks.
https://en.wikipedia.org/wiki/Large_language_model
https://en.wikipedia.org/wiki/Generative_adversarial_network
Kinda mean of you calling Billy precancerous masses like that smh
I don’t care about mean, but I would call it inaccurate. Billy is already cancerous. He’s mostly cancer. He’s a very dense, sour boy.
Everything machine learning will be called “ai” from now until forever.
It’s like how all rc helicopters and planes are now “drones”
People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.
This is similar to what I did for my master’s, except it was lung cancer.
Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple months, but it wasn’t until almost 2 years later they got to do their first actual trial.
Yes, this is “what it was supposed to be used for”.
The sentence construction quality these days is in freefall.
shrugs you know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how this generation of kids don’t know how to write a proper sentence.
What is in freefall is the economy for the middle and working class and basic idea that artists and writers should be compensated, period. What has released us into freefall is that making art and crafting words are shit on by society as not a respectable job worth being paid a living wage for.
There are a terrifying amount of good writers out there, more than there have ever been, both in total number AND per capita.
This isn’t a creative writing project. This isn’t an artist presenting their work. What in the world did that tangent even come from?
This is just plain speech, written objectively incorrectly.
But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.
Ironically, if they’d used an LLM, it would have corrected their writing.
Lmao
Sure, I definitely overreacted and I honestly was pretty stressed out the day I replied so yeah, fair. I think I have a point, this just wasn’t the salient place for it and I was too tired to realize that in the moment.
Objectively incorrect according to, who exactly?
Not everyone’s a native speaker.
Bro, it’s Twitter
And that excuses it I guess.
That would be correct, yes.
Twitter: Where wrongness gathers and imagines itself to be right.
Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye and ignoring the fact that one human cannot scan 15k images in one hour. Similar use case with medical imagery - seeing the things that are not yet detectable by human eyes.
Yeah there are some openly available datasets on competition sites like Kaggle, and some medical data is available through public institutions like the NIH.
I knew about Kaggle, but not about the NIH. Thanks for the hint!
Yeah there is. A bloke I know did exactly that with brain scans for his masters.
Would you mind asking your friend, so you can provide the source?
https://adni.loni.usc.edu/ here ya go
Edit: European DTI Study on Dementia too, he said it’s easier to get data from there
Lovely, thank you very much, kind stranger!
5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye and ignoring the fact that one human cannot scan 15k images in one hour.
what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl
My intended use case is to find possibilities for how ML can support people with certain tasks. Science is not political; I cannot control what my technology is abused for. This is no reason to stop science entirely; there will always be someone abusing something for their own gain.
But thanks for assuming without asking first what the context was.
find possibilities how ML can support people with certain tasks
Marxism-Leninism?
Oh, Machine Learning.
Science is not political
in an ideal world maybe, but that is not our world. In reality science is always always political. It is unavoidable.
Typical hexbear reply lol
Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.
that’s just going through life with blinders on
Typical hexbear reply
Unfortunately, you are right
Yes, typically hexbear replies are right.
It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.
My science is not. I like my bubble.
How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking themselves and their work unaffected by their ideology are the most affected by ideology. No wonder you like your bubble - from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing whether you realize it or not.
My intended use case is to find possibilities how ML can support people with certain tasks.
weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative
This is no reason to stop science entirely
if the only thing your tech can be used for is bad then you’re bad for innovating that tech
Removed by mod
no u
Ok
Ever thought about identifying ships full of refugees and send help, before their ships break apart and 50 people drown?
Of course you have not. Your hatred makes you blind. Closed minds were never able to see why science is important. Now enjoy spreading hate somewhere else.
Ever thought about identifying ships full of refugees and send help, before their ships break apart and 50 people drown?
No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?
I don’t think my reply was hostile; I just criticized your behavior of assuming things before you knew the whole truth. I kept everything neutral and didn’t feel the urge to have a discussion with someone already on edge. I hope you understand and also learn that not everything in this world is entirely evil. Please stay curious - don’t assume.
I just criticized your behavior of assuming things before you knew the whole truth.
I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.
Removed by mod
Removed by mod
These shitlib whiners don’t care and my comments have been removed for the horror of incivility towards dr von braun
“Removed by mod” suck my nuts you fascist fucks lol
Where is the meme?
Well, in Turkish, “meme” means boob/breast.
The ai we got is the meme
AI should be used for this, yes, however advertisement is more profitable.
It’s worse than that.
This is a different type of AI that doesn’t have as many consumer facing qualities.
The ones being pushed now are the first types of AI to have an actually discernible consumer-facing attribute or behavior, and so they’re being pushed because no one wants to miss the boat.
They’re not more profitable or better or actually doing anything anyone wants for the most part, they’re just being used where they can fit it in.
This type of segmentation is of declining practical value. Modern AI implementations are usually hybrids of several categories of constructed intelligence.