You didn’t even include the benefit of self-driving cars that tends to one-shot Bluesky's knee-jerk anti-AI types: they increase the accessibility of transportation for the blind by several orders of magnitude.
AI is not only the greatest current accessibility technology (e.g., for writing their beloved alt text); their beloved alt text is also one of the best sources of AI training data.
Generally when people complain about AI, they're not complaining about hurricane models, but about LLMs. So this post strikes me as missing the mark on that front.
I mean yes, this is the entire point of the post and is explicitly laid out in the text?
The issue here is that the term AI is too broad. “AI” covers a wide class of things, many of which have little to do with each other. “Large statistical optimization models” doesn’t have the same ring to it, but it makes clear that a large statistical model for improved mechanical part design has little in common with a large statistical model for lying on the internet, beyond some very basic notion of “uses statistics.”
AI is a term for wooing investors, not a technical term, but it pretends to be one. And it’s really the AI companies’ fault for trying to conceptually bundle a bunch of fairly disparate technologies and applications into a single package.
The papers on how these things all work won’t go away if OpenAI collapses; we shouldn’t be reluctant to sociotechnically clean house; we can always write another algorithm.
Yeah, I was pretty skeptical of AI usage as many people practice it, basically for the use cases you outlined above: the AI video, imagery, and audio slop.
So far, it works best for me as a companion to human expertise. I have a job where I sometimes need to figure out the total surface area of metal parts with complex geometries. Oftentimes this means breaking the shape down into several basic shapes. I know (or can find) the formulas for calculating the area of various shapes and can figure it out, but I also have an intuitive sense of what kind of numbers are accurate. I can give AI the measurements I have and it'll spit out a number, and I have my human sense to know whether that number is basically correct or not.
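To make that concrete, here's a minimal sketch of the decomposition approach with a hypothetical part and made-up dimensions (none of this is from my actual work): sum the basic-shape areas, then sanity-check the total against a rough expected range.

```python
import math

# Hypothetical bracket: a rectangular plate with one cylindrical boss on top.
# Dimensions are made up for illustration; a real part needs its own breakdown.

def plate_area(length, width, thickness):
    """Total surface area of a rectangular plate (all six faces)."""
    return 2 * (length * width + length * thickness + width * thickness)

def boss_side_area(diameter, height):
    """Side wall of a cylindrical boss; its end faces merge into the plate top."""
    return math.pi * diameter * height

total = plate_area(120, 80, 6) + boss_side_area(20, 15)  # mm^2
print(f"Approximate surface area: {total:.0f} mm^2")

# Human sanity check: the plate's two big faces alone are 2 * 120 * 80 = 19,200 mm^2,
# so a total far outside roughly 20,000-25,000 mm^2 would be suspect.
```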
But computing the area of complex geometries is an efficiently and exhaustively solved problem, at least in geographic information systems. Why would anyone want a potentially unreliable AI for this?
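For planar shapes, for instance, exact polygon area is a one-liner in a GIS library like shapely (the coordinates below are made up):

```python
from shapely.geometry import Polygon

# Hypothetical non-convex outline; the shoelace formula gives the exact area,
# with no model uncertainty involved.
poly = Polygon([(0, 0), (6, 0), (6, 4), (3, 2), (0, 4)])
print(poly.area)  # 18.0
```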
Oh, I think you may be dealing with physical parts. In that case, ignore my previous comment.
You are absolutely correct. AI has the potential to do incredible things that could benefit our society immeasurably. The key word is "COULD." The problem is that against these possibilities we have a CERTAINTY: AI will bequeath to us a world of total surveillance that could enable totalitarian government in the wrong hands (like, say, our current administration).
I know a successful (so far) early-stage AI company that uses an avatar to interview people and records voice and facial data to determine whether the speaker is telling the truth (a kind of polygraph alternative). Its accuracy has grown to be pretty amazing. One side of its business is used to identify suicidal ideation and other mental health issues, mainly in children. That's great! The other half of the business has applications for things like airport security, immigration, and law enforcement. In the future, AI cameras will be able to identify you by your blink. Not so great.
The only hard-science application of AI that bothers me (and you don't mention it in this article, because hey, negativity sucks) is the potential to use it to facilitate the design of new bioweapons and/or chemical agents for nefarious purposes: bioterrorism by lone wolves or terror groups, or biowarfare by rogue states. Then again, those same AIs could be used to combat said bioweapons via medical countermeasure research, but that's one of my biggest clear fears about it going wrong, besides chatbots and deepfakes...
It seems like there are two types of AI. There's generative AI: things like large language models, chatbots, audio, video, and image generators, etc.
And there are tailor-made, domain-specific AI/machine-learning tools: things like AlphaFold, Waymo, translation models, weather forecasting, etc.
Generative AI seems to operate on the premise that if they get enough compute and enough data, eventually the infinite antisocial probabilistic slop machine will turn into a digital god that will solve all our problems.
These domain-specific implementations, meanwhile, seem to be some combination of classical deterministic programming and probabilistic machine learning, focused on solving a particular problem.
I think people are pretty rightfully hostile toward/skeptical of the former, and far more accepting of the latter.
The line between translation AI and generative AI is pretty blurry.
Indeed, they are literally the same thing. The difference is basically a single technique (instruction tuning) that gets the model to answer any question about text instead of just translating it.
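A rough way to see it, with hypothetical training examples (the architecture and next-token objective stay the same; only the data changes):

```python
# Translation model: every training pair maps source text to its translation.
translation_pair = {
    "input": "Translate to French: The weather is nice today.",
    "target": "Il fait beau aujourd'hui.",
}

# Instruction-tuned model: pairs map arbitrary instructions to responses, so the
# same next-token machinery learns to answer any question about text.
instruction_pair = {
    "input": "Summarize in one sentence: The weather is nice today, so we went out.",
    "target": "They went out because the weather was nice.",
}
```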
This perspective is a vital reminder that AI’s transformative potential goes far beyond chatbots and viral content; its real-world applications in healthcare, safety, and sustainability are already saving lives and reshaping industries.
I talk about the latest AI trends and insights. Do check out my Substack; I am sure you’ll find it very relevant and relatable.
Thank you for the positive examples; I despise how much AI is infesting every single app on my phone right now. I am still skeptical of it ever making enough money to be worth the investments that have poured into it.
It is good to see some positives that make me question my view of it as just another "metaverse" that tech companies were obsessed with trying to sell to the userbase.
I hadn’t heard about the pesticides—that’s awesome!
These are the versions of AI that companies will pay for, certainly. Who is selling these versions? Seems like OpenAI and xAI are on the chat/image side, but I don’t fully know the players.
Since LLMs are a general-purpose technology, there isn't a lot of room for different players; there's no need for domain-specific technologies.
But Anthropic has the best programming models, and Google has the best cost-performance.
continuously fascinating to me the way public engagement with “technology” and “medium” has been overhauled in the last 12 months so that any sort of technocratically borne advancement in imperial life becomes evil and bad. like sure none of this is possible without evil computing - many such cases.
Now this version of AI is real human (haha) advancement. Thanks for sharing, so that we can continue to hate what is really social media AI slop. Well, and a lot of people are using ChatGPT etc. too much? But am I?