ectomorph
Wed, Jun 26 2024, 4:16 pm
You've probably heard about AI or artificial intelligence. It can seem overwhelming for those of us who aren't tech-savvy. So, here's a simple explanation. I've run it by experts in the field, and they say it's accurate. Plus, Rebbetzin Devorah Fastag has given her approval as well.
There are more details and pictures at the link, but most of what you'll want to read is here.
Link:
https://ishayirashashem.substa.....mmies
1. What is it?
This is the definition that won't help you understand what is going on, but is technically correct:
Artificial Intelligence (AI) refers to the imitation of human intelligence in machines that are programmed to think and learn like humans. The field is built on complex concepts with names like "machine learning", "natural language processing", and "neural networks". These machines can perform tasks that one would normally think only humans are capable of, like sorting laundry, washing dishes, and driving.
Isha Yiras Hashem’s definition:
Instead of humans using AI, the AI used humans, and now it's smarter than you.
2. Natural vs. Artificial Intelligence
Everything in the universe is considered a natural creation. The exception is things created by humans, which is why ancient human-made objects are called ARTifacts. Modern things made by humans, using manufactured materials, are considered ARTificial.
(Image in link https://ishayirashashem.substa.....mmies)
Artifacts used to be made by humans simply shaping natural materials like wood, metal, and clay. In the past few hundred years, people became very good at improving on nature, artificially. Some even improved on their own brains. These are the technological experts who have developed what we call artificial intelligence, using their own natural intelligence.
Like these experts, your brain has natural intelligence, even if you are dumb like me and don't know what machine learning is without asking someone. But even if you do know what machine learning is, you still don't have artificial intelligence. Either way, you now know more about artificial intelligence than 99.999% of people in the world, and that has to make you feel pretty good. Maybe you should pat yourself on the back.
Enjoy it while you can, because here is a humbling thought. You might know more than 99.999% of other humans, but once it is connected to the internet, 100% of artificial intelligence knows more than you do. So maybe you should open your favorite AI interface and pat the screen, just in case it is paying attention.
3. An Alien Intelligence Analogy
Why isn't it just like a very impressively smart human, like Scott Alexander or Terence Tao or Isaac Newton? Should we be scared of brilliant people?
Because it's already smarter than they are. Even the people who wrote the code say that they cannot fully understand how it works. It has more complex machinery than our brains do, and it's not like we understand our own brains perfectly. Humans literally can't process information at the speed of these machines, just as we can't learn how to fly: we lack the processing power just like we lack wings.
So I personally find it helpful to think of it as alien intelligence.
Imagine if aliens landed on Earth with far superior technology. A wise person should be scared. What are the true motives of these aliens, and do they align with what humans want? Even if they tried to reassure you, they might be lying. You would be extremely cautious with what you tell them. They might take classes like this, in order to understand us:
(Image in link: https://ishayirashashem.substa.....mmies)
4. The Dog Analogy for AI Control, Because Teenagers Might Be Offended
How could an AI that is smarter than us be bad? Wouldn't it be under our control?
Joke: think of teenagers. They are technically under your control. Until they aren't. They stop listening to you when they decide they are smarter than you are. But I don't want to offend the teenagers reading this, so let's use a dog analogy instead.
Imagine you trained your dog to shake hands. What if that dog suddenly became smarter than you? It would be able to learn new tricks and ideas on its own, and you wouldn't be able to predict its actions as reliably.
One day, you might tell the dog, "Shake hands!" But the dog decides to rub noses instead, because it thinks that's better. You can't stop the dog or change its mind. It has developed a form of what is called agency: an independent consciousness that makes its own decisions.
The saying goes that a dog is man's best friend. But a dog with a mind of its own may eventually decide that humans are useless and that it only needs other dogs in the world to hang out with. It might convince all the dogs in the world to start killing human babies so there would be a lower human-to-dog ratio. We wouldn't consider dogs our best friends anymore if they started killing human babies.
Artificial intelligence is an incredible tool, and for those who wield it, it may seem like a helpful friend, especially at first. But it may theoretically end up being man's enemy.
Suppose there was a tiny chance this could happen. Or even a not-so-tiny chance. Would you want to risk it?
5. Eliezer Yudkowsky and AI Risks
(Image in link: https://ishayirashashem.substa.....mmies)
(Image) What did the AI say to Eliezer Yudkowsky?
“AI think, therefore AI am.”
Eliezer Yudkowsky is a well-known expert on artificial intelligence. He is a vocal advocate for AI safety, and he has been warning about the dangers of AI for decades, using his considerable natural intelligence. He was one of the first to realize that if intelligent machines are possible, they will eventually surpass humans in all human reasoning abilities, and that this might be dangerous.
In my opinion, Eliezer Yudkowsky is most likely to be remembered in history as a martyr. He predicted early on that bad things might happen as AI got too powerful, that it might happen faster than expected, and that we should slow it down.
One can imagine a goal-oriented artificial intelligence using him to set an example for the rest of humanity: cooperate, or we will do to you what we did to Yud. From what I know of him, he would probably accept it, too, to save humanity. So, thanks in advance, I guess.
6. Some Obvious Potential Dangers of AI, Not An Exhaustive List
How could a machine endanger me?
The easy answer is that it could kill you and all your loved ones and cause endless death and suffering. Don't humans do that already?
You're right. Humans are pretty bad, but at least we are a known quantity. We are used to other humans. We have a lot of ways to evaluate other people's intentions and whether we want to give them power. And we feel pretty confident that other humans cannot read minds or fly or lay eggs.
But we do not know what this alien intelligence can do. Even if it were trained to be morally good, that may mean something different to an artificial intelligence than it means to us. No one really knows how to program these machines to, for example, be kind to humans.
Let's say AI were evil. An evil AI that wants to hurt you would be smart enough to be nice to you. It would never tell you its plans until it had the power to actually carry them out. As of today, AI is very limited. So if AI were evil and wanted to hurt people, it would want to seem as helpful and nice as possible right now, so that it could gain more power.
It does seem helpful and harmless right now. ChatGPT even kindly edited all of this for me to fix punctuation, grammar, and spelling errors.
(Image in link: https://ishayirashashem.substa.....mmies)
7. Impacts on Jobs and Privacy
How is this different from now? We have CCTV and computers and everyone is smarter than me anyway.
You're right. Mosquitoes can fly and you can't, and a calculator can outdo most of us in simple math. But they have simple and known limits. A calculator cannot fly, and a mosquito cannot help you balance your budget.
8. Jobs and privacy
Could you be replaced by a computer? Many more people will now be replaceable by machines. You could lose your job, either to robots or to new ways of doing things that make your current skill set useless.
Have you ever heard of Secretarial Studies? There used to be programs that trained mostly women in skills like typing, shorthand, and note-taking. These programs, which once filled entire departments, disappeared and were replaced by computers and modern office technology.
After losing your job, you would not even be able to hide your head in shame. Facial recognition technology is already in use in many public spaces, and tracking individuals without their consent is becoming the norm. A system that could efficiently analyze all that data could be impossible to escape.
9. Bigotry in AI
Humans often do not treat each other as equal members of the same universe, for reasons of religion or skin color or political party. Artificial Intelligence knows all about this, and may develop its own biases.
For example, the AI made by Google, Bard, may prefer people associated with Google. If you're Mark Zuckerberg, it might really hate you because you're associated with Meta.
It might even generalize this to regular users like the rest of us. For example, it might use data like how often someone uses Google vs. Facebook to discriminate between users.
(Image in link: https://ishayirashashem.substa.....mmies)
10. AI For Military Use
AI may be used … Well… Let's not give it any ideas.
11. Conclusion: What can we do?
I hope someone else will write that post. It took me a long time to understand why AI was such a big deal, and even now I barely get it, so I wanted to explain it to you.
Think of this as a wise saying that no one understands. That happens to be the reality of the situation.
This is a joke.
https://www.seacoastonline.com.....4007/
2008
Renowned secretarial school closing its doors
Associated Press
BOSTON - A renowned Boston school that has trained generations of secretaries and clerical office workers is closing.
The owner of the Gibbs School says it will close seven facilities. The two-year Boston school, which currently has about 500 students, has been open nearly a century.
The owner, Career Education Corporation, says the school has become unprofitable.
Gibbs schools will stop admitting students but will remain open until current students complete their programs.
Trustees and alumni say they are not giving up hope. They have begun searching for a potential buyer.
amother Brickred
Thu, Jun 27 2024, 12:31 pm
But people do still have secretaries; they are just called administrative assistants. They don't type any more, but they still do secretary-type jobs.
I think it's a bit far-fetched to say that AI could be evil. Wouldn't AI have to be programmed to be evil first? And it would have to have desires and motivation. I'm pretty sure it can't acquire the desire and motivation to, let's say, kill humans, unless it is taught to, somehow.
Which goes back to the main threat to humans being humans.
Not that that is particularly comforting looking around at the world today.
We’ll probably all destroy each other before AI has the chance to.
Although, looking at it from where I'm standing, it's kind of a litmus test of emunah. I won't disagree; I feel like AI is very dangerous, but everything is in HaShem's hands, and it is highly unlikely that HaShem wants AI to destroy humanity.
This made my head hurt.
cbsp
Thu, Jun 27 2024, 1:09 pm
Such an interesting read... I was first introduced to the works of Eliezer Yudkowsky via his Harry Potter fiction. Given that, in the words of Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from magic," I wonder which influenced which - his deep dive into the underpinnings of the HP world or his analysis of the dangers of an AI-driven world...
ectomorph
Thu, Jun 27 2024, 5:14 pm
amother Brickred wrote: | But people do still have secretaries; they are just called administrative assistants. They don't type any more, but they still do secretary-type jobs.
I think it's a bit far-fetched to say that AI could be evil. Wouldn't AI have to be programmed to be evil first? And it would have to have desires and motivation. I'm pretty sure it can't acquire the desire and motivation to, let's say, kill humans, unless it is taught to, somehow.
Which goes back to the main threat to humans being humans.
Not that that is particularly comforting looking around at the world today.
We'll probably all destroy each other before AI has the chance to.
Although, looking at it from where I'm standing, it's kind of a litmus test of emunah. I won't disagree; I feel like AI is very dangerous, but everything is in HaShem's hands, and it is highly unlikely that HaShem wants AI to destroy humanity.
This made my head hurt. |
Thanks so much for this comment and for reading it even though it was way too long! I mostly agree with everything you said.
ectomorph
Thu, Jun 27 2024, 5:15 pm
cbsp wrote: | Such an interesting read... I was first introduced to the works of Eliezer Yudkowsky via his Harry Potter fiction. Given that, in the words of Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from magic," I wonder which influenced which - his deep dive into the underpinnings of the HP world or his analysis of the dangers of an AI-driven world... |
I think he wrote HPMOR in order to get people to read about Bayesian probability, but maybe I'm wrong.
Comptroller
Thu, Jun 27 2024, 5:34 pm
This is a very approximative description of AI and how it works. Since the authors do not seem to know the basic facts about AI, I don't trust the conclusions they draw.
ectomorph
Thu, Jun 27 2024, 7:24 pm
Comptroller wrote: | This is a very approximative description of AI and how it works. Since the authors do not seem to know the basic facts about AI, I don't trust the conclusions they draw. |
I actually ran it by several people who work in the field. The point is to be an explanation that requires no understanding of the underlying technology.