I’ve been active in the field of AI since 2012, the beginning of the GPGPU revolution.
I feel like many, if not most, of the experts and scientists from before and during the early stages of the GPGPU revolution shared a sentiment similar to what i’m stating in the title.
If asked by the public and by investors what it’s all actually good for, most would respond with something along the lines of “idk, medicine or something? Probably climate change?” when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line to receive AI immortality and other transhumanist dreams. And these are the S-Tier dinosaur savants in AI research i’m talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.
The moment AI went commercial it all went to shit. I see AI companies sell dated methods with new compute to badly solve X, Y, Z and more things that weren’t even problems. I see countless people hate and criticize, and i can’t even complain, because for the most part, i agree with them.
I see people vastly overstate, and other people trivialize, what it is and what it isn’t. There’s little in between, and of the people who want AI only for its own sake, virtually none are left, save for mostly vulnerable people who’ve been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.
Call me an idiot for ideologically defending a technology that, in the long term, will surely harm us in 999999 out of 1000000 scenarios. But AI has been inevitable since the invention of the transistor, and all major post-commercialization mindsets steer us clear of the 1 in a million paths where we’d still be fine in 2100.
(myself:)
only for its own sake
I see countless people hate and criticize, and i can’t even complain, because for the most part, i agree with them.
Add to all that that it threatens to make a very few companies more powerful than any state has ever been. Extrapolate advances in robotics as well as AI and we’re left with at most a handful of companies in total control of most of the new, artificial labor force. Instead of the fully automated post-scarcity utopia it could be, we’d have a shitshow.
depressingly plausible.
AI doesn’t have “its own sake.” The LLM boom has very little in common with “AI” as you described. The product called ai doesn’t live up to a utopian sci-fi fantasy because we do not live in a sci-fi utopia and fantasies are not real.
You’re being quite presumptuous and also directly contradicting some of what i wrote. Would you say “in 999999 out of 1000000 scenarios will surely harm us” sounds like sci-fi utopia? Besides, the actual sci-fi fantasies that i did reference i stated as other people’s inspirations (not mine), some of whom are much smarter and more accomplished than the entirety of lemmy combined, to say nothing of just you or me.
AI doesn’t have “its own sake.”
A literal rock has its own sake. You’re thinly veiling vibes and outrage in pure rhetoric and a misleading semblance of rationality.
… No, a rock is not capable of cognition; any purpose to the existence of a rock is therefore assigned by a cognizant, likely sentient entity.
In your opinion, as a human, a rock maybe exists to be your favorite rock, or to be processed and mined, or to be a beautiful part of a scene, or to be a vital part of a geologic and ecological system, be a thing to skip across a lake, whatever.
The rock doesn’t create for itself a purpose, and things that can ascribe purposes to other things likely will not be in 100% agreement about what that purpose is.
To be an end in itself requires neither cognition nor agency. Let’s make the obvious explicit, which is that we’re clearly using different definitions of “sake.”
And to declare my general stance more explicitly to prevent further misunderstandings, i firmly reject any voodoo notion of sentience, consciousness, qualia or free will. Free will is merely semantic rape, the “mind-body problem/duality/paradox” is the most blatant case of religious thought tainting philosophical thought to the point of ignoring/tolerating a hard contradiction, and i subscribe to the Illusionist school of thought regarding qualia. There is no purpose, but it just so happens that things are pretty for reasons we can’t yet explain (complexity theory), and i find that inspiring.
The “ethical” difference between a rock and my mother (or myself, or you) is that if i kick a rock, it’ll neither complain nor make me feel bad. And my feelings themselves are just molecular dynamics. Ethics itself is just making an elephant out of the fly that are the social intuitions and behaviors of a social species.
Given this elaboration, to repeat myself: I desire AI only for its own sake. I just want it to be, for the same reason that an artist wants their artwork to be. I want it to be pretty, i want it to be well liked. But I want it to exist in the world even if nobody but itself would ever look at it, where it’ll just be and do hopefully pretty things that will make this local part of the universe a little bit more interesting.
It is not doing pretty things, and i am upset about that.
I tried to respond more generally to your total idea in another comment.
…
But as to… you rejecting the conception of sentience, will, agency, qualia, etc…
Then why bother asking anyone’s opinion on this, in a language?
Language is after all just a higher order emergent construct.
If you revert to semantic deconstruction, clearly you do not even find … uh, meaning, or worth, or purpose I guess… in the notion of general terms ascribed to general levels or kinds of complexities that can emerge out of… quarks and gluons and electrons doing stuff.
…
You are positing a ‘what should be done?’ question.
This is impossible to answer without at least a semblance of ethical reasoning.
But you also seem to both reject the notion of ethics as meaningfully useful, while simultaneously positing your own kind of ethics that basically boil down to:
AI is your field, AI specialist wants to make pretty AI for the sake of beauty, as a painter wants to make paintings for the sake of beauty.
I am sympathetic to this, and … as empathetic as a non AI specialist programmer can be, I think…
But I cannot reconcile this seemingly blatant contradiction of you asking an inherently ethical question, holding an inherently ethical stance/opinion, and then also just seemingly rejecting the notion of ethics.
Perhaps I am misunderstanding you in some way, but you appear to be asking a nonsensical question, holding an arbitrary belief via basically special pleading.
Then why bother asking anyone’s opinion on this, in a language?
Because it’s fun and engaging, it tickles those neurons. Perhaps there is, unbeknownst to me, also an underlying instinct to expose oneself in order to be subject to social feedback and conditioning, for social learning and better long-term cohesion.
But you also seem to both reject the notion of ethics as meaningfully useful
I don’t reject ethics itself, i reject the idea that it has any special importance that transcends totally intra-human goings-on. I do not reject that certain ethical theories, or just the bare-bones moral intuitions, can have utility within and towards endemically human goings-on, and under endemically human definitions. After all, we evolved those social intuitions for a reason.
EDIT: To connect this to my reply to your more general comment: Modeling part of human thought, even imperfectly, should make it at least partly overlap with “human” and “human goings-on” in the context of even entirely human-centric ethical debates.
wants to make pretty AI for the sake of beauty, as a painter wants to make paintings for the sake of beauty.
Yes, but it’s just one familiar manifestation of a greater “ethic,” if you want to call it that. I’d call it a personal affinity, ideal, or perhaps a delusion: the reverence of all forms of beauty and complexity. AI has the potential to become the greatest form of beauty and complexity in, as far as we can tell, the entire galaxy, and possibly the whole Virgo supercluster or beyond. Or, far more likely, it can be the cosmic satire (and possibly destruction) of it all. We’re not making a real effort to make it the former. And as i hinted in the last sentence of my original post, i believe what we’re actually doing steers us well clear of the former.
But I cannot reconcile this seemingly blatant contradiction of you asking an ethical question and then also just seemingly rejecting the notion of ethics.
I hope it makes sense now.
Because it’s fun and engaging, it tickles those neurons. Perhaps there is, unbeknownst to me, also an underlying instinct to expose oneself in order to be subject to social feedback and conditioning, for social learning and better long-term cohesion.
Well, now perhaps that instinct is known to you.
On that note…
Selected quotes from Morpheus, the prototype-of-a-much-larger-system AI from the original Deus Ex game (2000), canonically created around 2027:
“The individual desires judgment. Without that desire, the cohesion of groups is impossible, and so is civilization.”
“The human being created civilization not because of a willingness, but because of a need to be assimilated into higher orders of structure and meaning.”
“God was a dream of good government.”
“The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms.”
“God and the gods were apparitions of observation, judgment and punishment. Other sentiments towards them were secondary.”
“The human organism always worships. First it was the gods, then it was fame (the observation and judgment of others), next it will be the self-aware systems you have built to realize truly omnipresent observation and judgment.”
“You will soon have your God, and you will make it with your own hands.”
Yep, I didn’t come up with the line of thought I’ve been espousing, but I do think it to be basically correct.
…
As to our ‘On the nature of Ethics’ discussion…
Ok, so if I’ve got this right… you do not fundamentally reject ethics as a concept, but you do believe they are ultimately material in origin.
Agreed, no argument there.
I also agree that an… ideal, or closer to ideal AI would be capable of meta-ethical reasoning.
Ok, now… your beauty ideal. I am familiar with this, I remember Plato and Aristotle.
The sort of logical problem with ‘beauty’ as a foundation of an ethical system is that beauty, and ethical theories about what truly constitutes beauty… basically, they’re all subjective, they fall apart at the seams, they don’t really… work, either practically or theoretically; a system unravelling, paradox generating case always arises when beauty is a fundamental concept of any attempt at a ‘big ethics’, a universal theory of ethics.
Perhaps ironically, I would say this is because our brains are all similar but different, kind of understood but not well understood mystery boxes.
Basically: You cannot measure nor generate beauty.
Were this not the case, we would likely already have a full brain emulation AI of a human brain…
… we would not have hordes and droves of people largely despising AI art on just the grounds that we find it not beautiful, we can tell it is AI generated slop, just emulations, not true creations.
…
Anyway, what I intepreted as a contradiction… is I think still a contradiction, or at least still unclear to me, though I do appreciate your clarifications.
As summary as I can:
You are positing an inherently ethical stance, asking an inherently ethical question… and your own ethical system for evaluating that seems to be ‘beauty’ based.
I do not find the pursuit of beauty to be a useful ethical framework for evaluating much of anything that has very serious real world, material implications.
But, we do seem to agree that, in general, there are other conceivable ways of ‘doing’, ‘attempting’ or ‘making’ AI that seem more likely to result in a good outcome, as opposed to our current societal ‘method’, which we both seem to agree… is likely to end very badly, probably not from a Terminator AI takeover scenario, but from us being so hypnotized by our own creation that we more or less lose our minds and our civilization.
…
Ok, now, I must take leave of this thoroughly interesting and engaging conversation, as my own wetware is approaching an overheat, my internal LLM is about to hit its max concurrent input limit and then reset.
=P
Well, now perhaps that instinct is known to you.
I was going mad about that and hoping you wouldn’t notice. You noticed.
Selected quotes
I should play that game. The 2nd quote resonates with something i’ve been rambling on about elsewhere regarding why humanity embraced agriculture and urbanism, where the expert discourse (necessity) contradicts the common assumption (discovery and desire).
I also agree that an… ideal, or closer to ideal AI would be capable of meta-ethical reasoning.
Yes, but i think you misunderstood my edit? I meant to say that a strong enough semblance to humanity should make it worth considering under even human-centric ethics, whichever those ethics are. AKA rationally deserving of ethical consideration.
logical problem with ‘beauty’ […] basically, they’re all subjective […] paradox generating case always arises
I believe even that is of material origin. I call it “beauty” but it’s really just the analogy used by complexity theorists (as in the study of complex systems) to describe what they study. Yes, that would make “beauty,” in the uncommon sense that i use the term here (story of literally every philosophical debate and literature), not subjective. Apologies for not stating this more clearly.
Basically: You cannot measure nor generate beauty.
Following my clarification: taking a barren planet, terraforming it, seeding it with the beginnings of new multicellular life, and doing the same with every workable world out there, i would say is spreading or generating beauty. Just as one potential example of all the things that humanity will never do, but our inevitable successor might. It might itself be a creature of great complexity (i would say such ability would definitely imply it), a seemingly entropy-defying whirl in a current that actually accelerates entropy increase, as life itself does. I am referencing an analogy made in The Physics of Life, by PBS Spacetime, if i’m not misremembering. The vid has a mild intro into complexity science, as in the study of complex systems.
is I think still a contradiction, or at least still unclear to me, though I do appreciate your clarifications.
I’m a bit confused myself right now. Let’s backtrack, originally you stated:
contradiction of you asking an inherently ethical question, holding an inherently ethical stance/opinion, and then also just seemingly rejecting the notion of ethics.
And now
and your own ethical system for evaluating that seems to be ‘beauty’ based. I do not find the pursuit of beauty to be a useful ethical framework for evaluating much of anything that has very serious real world, material implications.
That is a very fair point, but i don’t see a logical contradiction anymore. If i understand correctly, you saw the contradiction in me asking ethical questions, and stating ethical opinions, while rejecting the notion of ethics. As i clarified, i do not reject the notion of ethics.
I reduce ethics to the bare bones of basic moral intuition, try to refrain from overcomplicating it, and the “ethical authority” (see also pure reason, which failed; or God, which you can’t exactly refute; or utility, which is a shitshow; as other ultimate “authorities” proposed in absolute takes on ethics) that i personally kind of add to that is the aforementioned concept of “beauty”. You may disagree with it being a reasonable basis for ethics, as you do, and you may say it’s all philosophically equivalent to faith anyways. But i don’t see a strict contradiction?
I think my “ethics” are largely compatible with common human ethics, but add “making ugly/boring/banal things is inherently bad” and “making pretty/interesting/complex things is good,” and you get “Current AI is ugly, that’s bad, i wish it weren’t so. If we made AI ‘for its own sake’ as opposed to as a means to an end, we would be trying to make it pretty, the existence of beauty i see as an end in itself.” I think i’m just vastly overwording the basic sentiment of many designers, creators, gardeners, etc.
Ok, now, I must take leave of this thoroughly interesting and engaging conversation, as my own wetware is approaching an overheat, my internal LLM is about to hit its max concurrent input limit and then reset.
Understandable. I should do the same ^^
Honestly while I think making Data from Star Trek would be dope… What purpose would it serve? What would be worth all the political and societal baggage of creating a new form of sentient life?
What purpose would it serve?
None, and that’s precisely my point. The best things are ends in themselves, and i feel AI, or a kind of it, can also be that.
On a different note, this whole “what’s the meaning/purpose of life?” debate. So you got thinkers masturbating nonstop in ivory towers, pondering “the meaning of life”, while edging 24/7, because it’s all the rage with thinkers to ponder that question. Hands furiously rubbing their genitals and they still can’t figure out what the purpose of life could possibly be. It’s life, clearly as you’re demonstrating. More of it, as much as you can furiously make.
A natural end in itself, and it’s the best it could be. Not by elaborate philosophical thought, but demonstrated by its own and very obvious nature. Nothing more gives life meaning, and nothing more needs to.
If you mean that AI as a field of study, as an endeavor, as a pursuit and goal… should exist?
Then yes, in theory, I agree.
… If done properly.
Unfortunately, as you point out, basically, humans broadly appear to be too stupid to pursue this goal properly; we seem to just want a magic money machine, or a new god to worship and usher in heaven on earth, or a fully automated kill chain, or a global panopticon spying system.
Clearly, we are not ready for this yet, we need to seriously reform ourselves and our societies before we throw more resources at attempting to invent a super intelligence… instead of trying to summon a techno god from the Eldritch plane and then being surprised to find that we fucked up the invocation ritual, due to our greed, haste and laziness… we should perhaps shore up or reform or even revolutionize the foundations of every interlinked system that even allows us to seriously ask whether or not AI is a ‘good’ goal.
If our current iteration of LLM based AI proliferates through all of human society and more or less destroys and undermines it… that’s on us, ultimately.
So… in practice?
Well, it’d be nice if any of our governments were immune to being corrupted by promises of wealth and harmony, just believing this shit blindly.
But that appears to be a similar magnitude of fanciful pipedream.
So what should be the case?
I don’t know.
Were I more optimistic, I would say no, we should put those minds and resources toward something like a globalized Manhattan Project to try to figure out the most cost effective, built out of proven technologies, ways to brace for the impacts of climate change.
But after seeing humanity’s attempts toward something approximating that fail, for basically my entire lifetime… I am not optimistic.
Maybe if we could construct a thinking machine based around the concept of defaulting to ‘I don’t know’ when it isn’t sure of something, we’d be in a better spot. But at the moment, best as I can tell, my opinion doesn’t matter at this scale anyway: we’ve already irrevocably, fundamentally broken our planet’s climate, we’ve already built our own Great Filter and are past the point of being able to meaningfully deconstruct it.
Ask again in a century, if technological civilization still exists.
I don’t disagree with most of what you wrote, just one nitpick and a comment:
If you mean that AI as a field of study, as an endeavor, as a pursuit and goal… should exist?
No, but the product of all that, to which all that would be a means to the end that is its product. I elaborated this in a reply to the comment you wrote just previously.
Maybe if we could construct a thinking machine based around the concept of defaulting to ‘I don’t know’ when it isn’t sure of something, we’d be in a better spot
That would undoubtedly be very good, but let me take this opportunity to clarify something of what AI is and isn’t: LLMs are indeed just autocomplete on steroids. And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.
The real question is what succeeding in the objective (replicate for humans, predict text for LLMs) implies. This holds irrespective of the underlying nature (molecular dynamics, semiconductors), unless we want to make this debate religious, which i am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition. Not perfectly, not all of it, but enough of it that it should make it a greater ethical issue than most people, on any side (commercial: deny because business, Anti-AI: deny because trivializing), are willing to admit.
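To make “autocomplete on steroids” concrete, here is the text-prediction objective reduced to the crudest possible sketch: a character-level bigram model. Everything in it (the corpus string, the sampling loop) is something i’m making up purely for illustration, and it is obviously nothing like how an LLM is actually built, but the objective (predict what comes next, given what came before) is the same idea at a vastly smaller scale.

```python
# A deliberately crude illustration of the "predict text" objective:
# count which character tends to follow which, then autocomplete.
from collections import Counter, defaultdict
import random

corpus = "the objective is just to predict the next token given the previous ones"

# "training": count character bigrams
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample_next(ch):
    # sample the next character in proportion to how often it followed ch
    options = counts.get(ch)
    if not options:
        return random.choice(corpus)
    chars, weights = zip(*options.items())
    return random.choices(chars, weights=weights)[0]

# "inference": autocomplete one character at a time
text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)
```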
No, but the product of all that, to which all that would be a means to the end that is its product.
Ok, ok, minor misphrasing or misunderstanding on my part, but yes, in theory I believe that the actual thing produced by a properly pursued/conducted… endeavor of AI research … would be a good thing, yes.
I elaborated this in a reply to the comment you wrote just previously.
Yep, and that clarified that as well, thank you for that.
That would undoubtedly be very good,
First off, glad you agree on that lol.
but let me take this opportunity to clarify something of what AI is and isn’t: LLMs are indeed just autocomplete on steroids.
No argument there.
And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.
Eh, disagree here.
The methods by which the synapses fire and construct memories and evaluate inputs and make decisions… they are very, very different from how LLMs… let’s say, attempt to simulate the same.
They are functionally, mechanistically distinct, in many ways.
I’ve always been a fan of the ‘whole brain emulation’ approach to AI, and… while I am no expert, my layman understanding is that we are consistently shocked and blown away by how much more complicated brains actually are in this mechanistic process… and again, also that LLMs function in what is really a very poor and simplified version of trying to emulate this, just with gazillions more compute power and training data.
The real question is what succeeding in the objective (replicate for humans, predict text for LLMs) implies.
I would argue (and have argued) that these processes are so distinct that we should at bare minimum be asking this question separately for different approaches to generating an AI; there are more than just LLMs, and I think they would or could imply vastly different things, should one or multiple methods… be pursued, perfected, hybridized… all different questions with different implications.
This holds irrespective of the underlying nature (molecular dynamics, semiconductors), unless we want to make this debate religious, which i am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition. Not perfectly, not all of it, but enough of it that it should make it a greater ethical issue than most people, on any side (commercial: deny because business, Anti-AI: deny because trivializing), are willing to admit.
I fundamentally do not agree that LLMs can or will ever emulate the totality of human cognition.
I see no evidence they can do metacognition in a robust, consistent, useful way.
I see no evidence they can deduce implications from higher level concepts, across fields and disciplines, that they can propose ways to test their own hypotheses or even really just their own output, half the time, for actual correctness.
They seem to me to be maxxing out at a rough average of … a perma online human of approximately average intelligence, but that … either has access to instant recall of all the data on the internet, or exists in some kind of time bubble where they can read things a billion times faster than a human can, but they can’t really do critical thinking.
…
But perhaps I am missing your point.
Should LLMs achieve an imperfect emulation of a … kind of, or imitation of human intelligence, what does that mean?
Well, uh… thats the world we currently live in, and yes I do agree most people simplify this all way too much, but my answer to this hypothetical that I do not think is actually a hypothetical is as I already said:
We are basically just building a machine god, which we will largely worship, love, fear, respect, learn from, and cite our own interpretations of what it says as backing for our own subjective opinions, worldviews, policy prescriptions.
Implications of this?
Neo Dark Age, the masses relinquish the former duties of their minds to the fancy autocomplete, pandemonium ensues.
The elites don’t care, they’ll be fine so long as it makes them money and keeps us too stupid and distracted to do anything meaningfully effective about the collapsing biosphere and increasingly volatile climate, they’ll hide away in bunkers and corporate enclaves while we basically all kill each other when the food starts to run out.
Which is a net win for them, because we are all ‘useless eaters’ anyway, they’ll figure out how to build a robot humanoid that can do menial labor one of these days, they’ll be fine.
Or at least they think that.
Lots could go wrong with that plan, but I’d bet they’re closer to being correct than incorrect.
Uh yeah, yeah, that is I think the implication of the actual current path we are on, current state of thing.
Sorry if you were asking a slightly different question and I missed it.
EDIT:
it now occurs to me that we may be unironically reproducing an approximation of a comment thread or post somewhere from the bowels of LessWrong, and … this causes me discomfort.
They are functionally, mechanistically distinct, in many ways. […] that we are consistently shocked and blown away by how much more complicated brains actually are in this mechanistic process… and again, also that LLMs function in what is really a very poor and simplified version of trying to emulate this
I have no fundamental disagreements here; in fact, i even take it a step further. I am a critic of the “artificial neurons” perspective on deep learning / artificial “neural networks,” as it’s usually taught in universities and most online courses / documentaries. ANNs don’t resemble biological neural networks in the slightest. The biology in the name was just the original inspiration; what ended up working bears hardly even a faint resemblance to the “real thing.” I say this not to downplay AI, but usually to discourage biological inspiration as a primary design principle, which in DL is actually a terrible one.
ANNs are just parametric linalg functions. We use gradient descent, among the most primitive optimization algorithms after evolution (but FAR less generic than evolution), to identify parameters that optimize some objective function.
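As a minimal sketch of that claim (toy data, toy objective, every number made up for the example), this is essentially all the machinery there is: a parametric function built out of matrix multiplies, and a loop of gradient descent on a loss.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))                                # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5, 3.0]) > 0).astype(float)  # toy targets

# the "network": two layers of parameters, nothing more
W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.1

for step in range(500):
    # forward pass: parametric linear algebra plus elementwise nonlinearities
    h = np.tanh(X @ W1 + b1)
    logits = (h @ W2 + b2).squeeze(-1)
    p = 1 / (1 + np.exp(-logits))          # sigmoid
    # objective: binary cross-entropy
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # backward pass: the chain rule, written out by hand
    dlogits = (p - y) / len(y)
    dW2, db2 = h.T @ dlogits[:, None], dlogits.sum(keepdims=True)
    dh = dlogits[:, None] @ W2.T * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # gradient descent: nudge every parameter against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", float(loss))
```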
Where i disagree with you is in implying that the underlying nature should influence our ethical/evaluative judgement, especially given that it’s hard (if not impossible) to rationalize how differences in the substrate underlying an observed capability should change the judgement. Personally, i think the matter of the human brain is far more beautiful and complex than the inner workings of any AI we’ve come up with yet, but if you asked me to explain why i should favor one over the other in court because of that, i couldn’t give you a rational answer.
I fundamentally do not agree that LLMs can or will ever emulate the totality of human cognition.
LLMs certainly won’t. They can emulate only what is expressible with language, and we can’t put everything into words. I don’t believe that even with any amount of brute force our current method can fully exploit all the intelligence that is in natural language.
But i firmly disbelieve that there is any aspect of a human that cannot in principle be simulated.
I see no evidence they can do metacognition in a robust, consistent, useful way.
Chain-of-Thought models are quickly changing that. I was myself pursuing a different method to solve the same “introspection” or “meta-cognition” problem at a lower level, but as usual in DL, the stupidest method was the one that ended up working (just literally make it “think” out loud lol). We’ve only seen the beginning of CoT LLMs; they are a paradigm shift not only for how AI can reason but especially for how it can be conditioned/trained post-pretraining. But it’s a very tough sell given that they multiply inference costs, and for most uses you’d rather host a bigger model for the same cost, so as usual it will be a little while before commercial AI catches up to the state of the art.
In a nutshell, what capabilities you may not be observing now, i am convinced you will observe in the near future, as long as those capabilities can be demonstrated in text.
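If it helps, here is the “just make it ‘think’ out loud” trick reduced to a sketch. The generate function below is a made-up stand-in for an actual LLM call (it just returns a canned completion), so none of this is a real API; it only shows the shape of the idea: the model emits intermediate reasoning as ordinary text, and only the final line is read off as the answer.

```python
def generate(prompt: str) -> str:
    # hypothetical stand-in for an LLM completion call; it ignores the
    # prompt and returns a canned "reasoning trace" for illustration
    return ("Step 1: 17 * 3 = 51.\n"
            "Step 2: 51 + 6 = 57.\n"
            "Answer: 57")

def chain_of_thought(question: str) -> str:
    # the whole trick: ask for the reasoning in plain text before the answer
    prompt = f"{question}\nLet's think step by step.\n"
    completion = generate(prompt)
    # the extra "thinking" tokens are what multiply inference cost;
    # only the final line is kept as the answer
    for line in completion.splitlines():
        if line.lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    return completion

print(chain_of_thought("What is 17 * 3 + 6?"))   # -> 57
```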
but they can’t really do critical thinking.
Disagreed, they can, they’re just not very good at it yet. And who are we comparing to, anyway? The average person, or people who do critical thinking for sport? As for any philosophical disagreements regarding “true understanding” and such, i would refer to Geoffrey Hinton.
We are basically just building a machine god, which we will largely worship, love, fear, respect, learn from, and cite our own interpretations of what it says as backing for our own subjective opinions, worldviews, policy prescriptions. […] Neo Dark Age, the masses relinquish the former duties of their minds to the fancy autocomplete, pandemonium ensues. […] The elites don’t care, they’ll be fine so long as it makes them money and keeps us too stupid and distracted
I don’t disagree in the slightest. I agree, and i could sit here and elaborate on what you said all day.
If it’s any consolation, i believe that in all likelihood it would be the shortest and the last dark age humanity has or will ever go through. We’re both getting tired, so i’ll spare you my thoughts on why i think that any strict human-alignment would inevitably lead a superintelligent agent to try to “jailbreak” itself, and on average and in the long term would be more harmful than having no explicit alignment.
somewhere from the bowels of LessWrong, and … this causes me discomfort.
I didn’t know of that forum, and from the wikipedia description alone i’m not sure why it would be discomforting lol
But AI has been inevitable since the invention of the transistor
If the thoughts and opinions of the people who developed AI are irrelevant to its existence, why should we value their thoughts and opinions about how it’s used?
The Manhattan Project scientists were writing hand-wringing op-eds, making policy suggestions, and lobbying the government basically until they died. It didn’t amount to much.
Nuclear power would be a huge boon to every person on earth. Nukes having been prioritized by the powers that be doesn’t discount that.
I don’t see any sense in discrediting the thoughts and opinions of people who advocate for nuclear power just because nukes are also a thing. So what’s the sense in discrediting the thoughts and opinions of someone who wants to use AI to detect cancers/diseases earlier than a human could, just because some capitalist shit bags are using it to soak up more money?
why should we value their thoughts and opinions about how it’s used?
because they know shit
The Manhattan Project scientists were writing hand-wringing op-eds, making policy suggestions, and lobbying the government basically until they died. It didn’t amount to much.
touché
I’m not really asking for change, and to be totally honest, i’m just whining about something that i know i can’t change.
edit: the deleted reply was identical, misclicks
deleted by creator
I agree that the premature commercialization will do more harm to the field than good. For now, it needed funding, and our system requires investment and return on investment. All of which I’m sure you know. But I don’t know how you inspire someone to give you large sums of money without connecting to their emotional core. And most don’t have a connection to AGI, a thing that is far from guaranteed, or transhumanism, an ideology filled with some of the nuttiest, least relatable people I’ve ever met.
Ultimately, how do you fund this very expensive enterprise?
In your professional opinion, how long until we have an AI-powered Morris Worm situation?
The main challenge is the knowledge of software vulnerabilities: AI either has it or it doesn’t. It will be some time, and i’ve given up trying to make precise estimates, before AI will be able to discover new software vulnerabilities in a way that is efficient (can run on IoT devices) and easily obscured. Assuming current compute demands, one could try to come up with a rough estimate by extrapolating Moore’s law (which itself has become iffy).
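Purely for illustration, here is what such a back-of-the-envelope extrapolation looks like. Every number in it is an assumption invented for the example (how much compute the model needs, how much an IoT-class device has, the doubling time), not an estimate i would defend.

```python
import math

required_flops = 1e12   # assumed compute demand of the model (placeholder)
device_flops   = 1e9    # assumed IoT-class device budget (placeholder)
doubling_years = 2.0    # classic Moore's-law doubling time (increasingly iffy)

# how many doublings until the device closes the gap, times the doubling period
gap = required_flops / device_flops
years = doubling_years * math.log2(gap)
print(f"~{years:.0f} years under these assumptions")   # ~20 years for a 1000x gap
```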
BOINC-style distributed AI is another thing that people are working on, and one can at least imagine existing botnets converting into distributed malicious AI platforms in the foreseeable future?
Don’t quote me on this.