AI expert Dr. Gary Marcus joins CNN’s Fredricka Whitfield to discuss the dangers of artificial intelligence. #CNN #News
Before you criticize someone, walk a mile in their shoes. That way when you criticize them, you’re a mile away and you also took their shoes.
Classic, 🤣😂👍
I gotta try to remember that one. 🤔 🧐 🤭 😆
Customer support is going to jump on this. Callers will talk for hours thinking they’re talking to a human.
@Lane Atkinson 😂that’s always fun to do.
Good. Most of the human representatives are arguably worse
I recently had a tech support ticket open with what appeared to be either 1) a poor human forced to follow a rigid, pre-determined script even though I stated unequivocally at the start that I’d already tried all of those steps; or 2) a very poor AI that would only give canned responses without considering anything I said. Not sure which is worse, but the conditions people work under in call centers certainly aren’t good.
@Holyshiszle I’m definitely not ok with it. I don’t want to speak to a robot, I want to speak to a human. I do not care HOW advanced or what kind of a job it does. It’s not for me.
I’m the guy that will go to the lane manned by a person rather than go to the self checkout. That crap is ruining us.
If it can be used for nefarious purposes it will be. Human nature.
The one constant of the universe….human greed and avarice!
Yes. But there’s nothing we can do
Always be vigilant, never weaken, hold on to hope are 3 things we can do. They do work.
It’s never been more important to consider the source.
When we stop outsourcing customer service to India, then I’ll believe in AI.
Boom!
You sure you didn’t just get scammed?
I asked ChatGPT for a haiku about the transience of beauty:
_Petals fall gently,_
_Crumbling hues lost to the wind._
_Fleeting beauty’s end._
Go touch some grass
@полночный свет Anyone who uses AI is going to have more time touching grass than an actual poet.
@V01DIORE you actually just destroyed that guy with beautiful logic
This has been around for a long time, the public is the last to know.
I would like to have it as a tool for my own writing. I can give it the information I want to write about, and then see if it has any ideas I want to use. I wouldn’t use it instead of my own writing. I would just use it for extra ideas. It is just an experiment that I would like to try, to see if I could write faster.
Yeah so far that’s a really good way to use it. Good for brainstorming and experimenting.
@Unoseth Dohi If I understand your question correctly, you are asking whether there are problems that arise when AI is used to replace or augment human decision-making.
The answer is that both scenarios can potentially lead to problems.
If AI is used to replace or augment human decision-making, there are potential problems. For example, there may be ethical concerns around whether certain decisions should be left entirely to machines, and whether they can be held accountable for their actions. Additionally, AI may not be able to fully replicate human intuition, creativity, and empathy, which are important factors in many decision-making processes.
On the other hand, if people use AI, there are also several issues that can arise. For example, if people rely too heavily on AI and trust it completely, they may overlook important information or make decisions based solely on the AI’s output without fully understanding the reasoning behind it. Additionally, if AI is not properly designed or tested, it can produce biased or inaccurate results, which can lead to incorrect decisions or actions.
Therefore, it is important to consider the potential benefits and risks of using AI in any given scenario, and to design and implement AI systems in a responsible and ethical manner.
@rip_godzilla This sounds AI generated. The problem isn’t us implementing AI. The problem is AI implementing us.
@Jen Kem The problem is it makes writing so easy that it lets people with no writing talent write talentless writing.
@Unoseth Dohi I’m assuming that you are asking about the potential implications of AI using human beings. In this scenario, it is important to note that AI does not have agency or the ability to make decisions on its own, but rather it is programmed by humans to perform specific tasks or functions.
However, there are certain concerns that arise when AI is used to monitor, track, or manipulate human behavior. For example, if AI is used to monitor employees in the workplace, there may be concerns around privacy, surveillance, and the potential for misuse of data. Similarly, if AI is used in social media or advertising to influence human behavior, there may be concerns around the ethics of such practices and the potential for unintended consequences.
It is important to recognize that AI is a tool that can be used for both positive and negative purposes, depending on how it is designed and implemented. Therefore, it is important for developers, policymakers, and society as a whole to carefully consider the potential implications of AI and to use it in ways that promote ethical and responsible behavior.
It’s not a question of if bad actors will use A.I. to scam people. THEY WILL. In all areas of our life, anyway they can.
Bad Actors like the US Government?
laws and regulations will be in place
@David Heller People have to actually get caught committing a crime before they can be charged with one. How much damage will be done before they are caught?
@V01DIORE So, humanity needs to waste even more time, energy, and resources on this BS?
@Mza Mza Pretty much unless you want to just live with the new problems without solving them.
GPT-4 is not a pastiche or a mimicry of a huge database; you can ask it to reverse the order of or alphabetize *anything.* To do that it needs to _formally control_ the shape of the output according to implicit higher-order categories, and “pastiche” won’t achieve that over open-ended input requests. Also, it was not taught any of this; it has emergent abstraction ability, and abstractions to some degree are formal reasoning, which is the next step above concrete reasoning and the hallmark of thinking. GPT-4 is borderline meta-aware of some concepts, just like humans, but without the feelings that delude one into thinking they’re thinking…
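For what it’s worth, the claim above is easy to try for yourself. Here’s a minimal sketch of that kind of test, assuming the OpenAI Python client (v1 interface), access to a "gpt-4" model, and an OPENAI_API_KEY set in the environment; the word list and the exact prompt are just illustrative, not anything the commenter actually ran.
```python
# Hypothetical sketch: ask GPT-4 to alphabetize an arbitrary word list
# and compare its answer against a plain lexical sort.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

words = ["zephyr", "quark", "mango", "abacus", "lantern"]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Alphabetize these words, one per line, nothing else: " + ", ".join(words),
    }],
    temperature=0,  # keep the output as deterministic as possible
)

# Parse the model's lines and check them against Python's sorted() order.
answer = [w.strip() for w in response.choices[0].message.content.splitlines() if w.strip()]
print("model: ", answer)
print("sorted:", sorted(words))
print("match: ", answer == sorted(words))
```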
I was born too late, take me back to the 80s
Why? This is the time to be alive
Oh my goodness, Stephen Hawking warned us about this AI thing before he died 😲
What Dr. Marcus describes sounds like a number of people I’ve seen existing in the world… a lot of info on hand, inadequate synthesis of that information, but good enough to convince others that it’s intelligent when many times it’s just making stuff up. AI could have a wonderful career ahead of itself I’d say. 😬💁🏾♂️
I think we’re already at the point with AI where the internet can’t be trusted. I think we need something like a net ID to be presented online for validation.
All videos will have to be run through a pre-filter before being uploaded; we have powerful software that can easily recognise deepfake content. It will still be a game of cat and mouse, so we need to structure the laws correctly to filter out false content.
Did AI engineers leave out the ethics variable from the algorithm by accident or on purpose?
Well.. from my understanding he said it doesn’t really know what it’s talking about. It’s just putting stuff together that may or may not be coherent. If that is the case I don’t think it would be possible to give it “ethics”. At least not until it was actually intelligent.
Some years ago this type of AI technology was purely science fiction. If it goes out of control it can be used for misinformation, impersonation of people, and so on. It would be important to have some type of control over AI.
I embrace AI wholeheartedly. Along with the idea of a universal basic income.
Another problem with the deep-learning technique used is that it may end up giving a voice to very obscure and extreme ideologies that would otherwise be ignored by the majority, but that get picked up by the large language models as they scrape the Internet looking for data to learn from.
This guy is kind of a dope. Taking a mass of information and then using it to figure stuff out is pretty much exactly what humans do too.