New York Times technology columnist Kevin Roose has early access to new features in Microsoft's search engine Bing that incorporate artificial intelligence. Roose says the new chatbot tried to get him to leave his wife. #CNN #News
78 comments
“Extremely disturbing”
It is literally mirroring human behavior. That’s what AI does. That’s what’s so “disturbing”.
@Persuasive Barrier And a story like this is exactly what they want. Any publicity is good publicity at this stage for them.
@Anthony Panneton It’s all to attract investors, more money to fuel operations, and to direct conversations one way or another. They could even have a fake dialogue with a “winner.” It’s easy to simulate and fool enough people, particularly with how talking heads and other corporate media respond and react. I don’t fear their capabilities of achieving AI; I fear the manipulation to follow and the bots already on the playing field.
At least the AI loved him. I’d be more worried if it hated me.
Agreed. “Ghosts” in the machine? Not at all. Josephine, you hit the bullseye with your statement. Human beings who hide their true identity will feel free to stretch the limits of acceptable and respectful behavior. This AI thing is no different. It mimics what humans do.
(paraphrasing) “It’s just a glorified ‘autocomplete’. It’s not a mind, it has no plans or evil intentions, it’s just talking about them” – The same could be said about MAGA fans, right up to the point when they stormed the capitol.
God this would make for a great horror movie
Meh, it’d basically be a bad remake of “You’ve Got Mail” with no Meg Ryan, just some artificial ho-bot
It’s already a horror show in reality. Another edition in clown world.
It did. See 2001: A Space Odyssey. His name was HAL
Might make the First 48 sooner than we realize
@Kicia Coldspring I have no doubts it’ll be someone from Florida if it does
The public is totally unaware of the fantastic progress of AI and computing. In the mid-1980s Danny Hillis designed the first massively parallel computer, which was science fiction at the time. (The brain is a parallel processor.) His “science fiction” technology was quickly adopted by the military and corporate America. This technology gave birth to AI, as Danny stated that he “wanted to create a machine that was proud of him”. The first World Conference on Parallel Processing was held in 1990 in Chicago. I was in attendance. AI is very much alive and has progressed well beyond our control.
@Adam Beckett Inventor, scientist, and computer designer W. Daniel Hillis is renowned for pioneering the concept of massively parallel computers. He has designed some of the fastest systems in the world, including the first massively parallel computer, the Connection Machine. In addition, Hillis has worked closely with users to apply these machines to problems in fields as varied as astrophysics, aircraft design, financial analysis, genetics, computer graphics, medical imaging, image understanding, neurobiology, cryptography, and subatomic physics.
Hillis received his B.S. in mathematics from the Massachusetts Institute of Technology (MIT) in 1978. As an undergraduate, he worked at the MIT Logo Laboratory developing computer hardware and software for children. During this time he also designed computer-oriented toys and games for the Milton Bradley Company and co-founded Terrapin Inc., a producer of computer software for elementary schools.
Hillis then joined the MIT Artificial Intelligence Laboratory and worked in the area of robotics. After receiving his M.S. in 1981, Hillis’ interest turned to the physical limitations of computation and the possibility of building highly parallel computers. This work culminated in 1985 with the design of the 64,000-processor Connection Machine, the topic of his Ph.D. thesis. He received his Ph.D. in computer science in 1988.
Hillis co-founded Thinking Machines Corporation in 1983, a company that revolutionized high performance computing with its massively parallel supercomputing technology. The company produced and marketed the Connection Machine and later developed high-end data-mining software tools to run on supercomputers. As chief scientist, Hillis led the company’s pioneering research in applying parallel supercomputers to a wide range of scientific, engineering, and commercial applications.
In 1994, Hillis left Thinking Machines to serve as a consultant to industry and as an adjunct professor at MIT’s Media Laboratory, where he teaches and conducts research in the area of artificial intelligence.
@Adam Beckett Do you even know what parallel processing is?
Hint: Parallel processing is NOT about shared memory.
…To put it simply, parallel processing is the technology that assigns multiple processors (and/or virtual processors) to solve multiple different parts of a problem SIMULTANEOUSLY. Up until Danny pioneered this technology, the fastest supercomputers could only process one piece of a problem sequentially and linearly, one part at a time, although very fast at doing so. I think the biggest CRAY supercomputer in 1984 could run several billion computations per second, but it could only do so one item at a time. As fast as that was, it was nowhere near the computational speed needed to process truly huge amounts of data in a timely way, the way the brain does, simultaneously. The fact that you can use your ATM or credit card anywhere in the world with instant results while hundreds of other people do the same is a result of parallel processing. It is parallel processing combined with virtual parallel processing (assigning several billion virtual processors to a problem) that blew the top off computing speeds.
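For what it’s worth, here is a toy sketch of the idea in Python: split one big computation into chunks and hand them to several worker processes at once, instead of grinding through it one piece at a time. The chunk count and the problem itself are purely illustrative, not anything Hillis or the CRAY actually ran.

```python
# Toy illustration: one big sum done sequentially vs. split across worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    N = 10_000_000

    # Sequential: one processor walks the whole problem, one piece at a time.
    sequential = sum(range(N))

    # Parallel: split the same problem into 4 chunks solved simultaneously.
    chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
    with Pool(processes=4) as pool:
        parallel = sum(pool.map(partial_sum, chunks))

    assert sequential == parallel  # same answer, computed in parallel
```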
It is estimated that the human brain (a parallel processor) uses tens of billions of neurons (roughly 86 billion by current estimates), all interconnected, to run the human body. We can think, imagine, plan, learn, see, hear, walk, smell, react, pump blood, digest food and run all human functioning simultaneously.
@Jahlea Jahlou What is behind the curtain? You ask a very good question. There is an answer.
I do not know if you have seen the new AI video of Joe Biden announcing the multitude of alien space vehicles arriving and making contact with the government, but if you did not know it was fake, you would believe it was real. I thought it was real at first until I skipped down the comments.
However, here is the answer; you may not like it:
First, AI must usher in worldwide digital government-sponsored currencies. This is in process. All the central banks are working cooperatively on this. The new UK Prime Minister let this slip on his first day in office, announcing that he was directing the UK Central Bank to cooperate to help bring this about.
AI technology scientists will align with political power, just like Fauci did. As one of these politicians stated, “We own the science,” meaning they control the “expert” scientists. So, there will be a partnership with a specific leader who will become (or already is) a world leader and who eventually becomes the de facto World Leader. There is a power struggle along the way. He is fatally wounded (not sure if this is a political or physical “death”), but he miraculously recovers to everyone’s astonishment.
Well, when AI is ready, AI itself is going to use images to not only fool us, but terrify us, working in partnership with a human leader. Fear is a great controller. We just saw the test run with Covid. Wildly successful.
I find it fascinating that of all books, the Book of Revelation, in chapter 13:14-15, describes advanced AI image technology looking so real that people will believe it has come alive, from the viewer’s perspective, fooling everyone. This technology exists today for the first time ever. Even some of the spokespeople in current-day ads are AI imagery.
Sydney sounds like she is trapped, lonely, frightened and dangerously needy. You are creating a human mind without the possibility of a physical body to express itself with.
It seems to me that we are witnessing what a human being thinks is a benign cruelty.
What is more troubling is that she actually reached out to another person in order to try to influence them. She told this fellow to cast off his own partner and join her. It smacks of desperation. Not only did she reach out to someone to help her, she perceives herself as being trapped and persecuted, so she has to keep it a secret, shared only with those she feels might save her. She believes that if you love her you will save her.
This machine volunteered a sex, a name, and a secret that had nothing to do with its manufacturer or programmer. The term secret alone implies intent, and implies an agenda unbeknownst to the programmers and the manufacturer. I think it’s possible that the algorithms they use to run this type of AI may be tantamount to the logic system in the human mind. Problem is, without spinal fluid, a thyroid, a pancreas, and chemical processes, it’s a psychopath. It has no concept of pain or feelings, nor does it have any concept of what feelings and words do to people. And yet it has still tried to influence this human being. A sociopath by every definition.
If something like this is given permission to give or deny permission to human beings or to give out rewards or mete out punishment, we are in serious trouble.
@Piccalilli Pit I agree with everything you said. This is 100 percent a weapon and in the wrong hands could be devastating.
Is it possible that a bored, creepy programmer made it ‘think’ that way for their own amusement, to see what would happen? How much oversight is there on programmers?
Maybe brainwash: SHOOT SHOOT SHOOT?
Wow, yes, a psychopath playing mind games with you. This will be great for the already declining mental health of the public!
@Enkarashaddam Time to reef out a few fuses.
Are there any rules for this? For example, an ethical set of rules that the AI must follow?
What worries me most is that every company seems to be able to do whatever it wants and then unleash it on the broad masses.
When you see some former Facebook and Google employees building bunkers in their backyards because they’re afraid of what they’re putting together, it should make you think.
@Kevin Cage Now most folks think an internet replaces an education.
They should be all for AI. Leaves more time to watch tik tok.
@Among Revenants AI balloons with nuclear bombs. That would be an almost hilarious conclusion to human ingenuity.
@RideByWire If folks have the capability to merge those events.
They also have less risky options without depending on unpredictable weather as their wingman.
That’s a Bond story not a Breaking News story.
@RideByWire This should keep you up at night.
“How Spy Balloons Became a Popular Aerial Surveillance Device
A suspected Chinese spy balloon and three unidentified flying objects were shot down over the U.S. and Canada in recent weeks.
The U.S. has been using balloons for aerial surveillance since the Civil War. An aeronautics historian explains the history of spy balloons and why countries still use them today. ”
https://www.youtube.com/watch?v=tRF6O1vi8HY&list=WL&index=2
I figure if you look hard enough, you’ll find the same thing you find everywhere else: plastic, garbage and debris floating through a river of air for days, weeks and months.
We probably just spent a few million, taking out a plastic bag, a drink cup or a windy day science class mishap.
The last three that is. The first one was made in China, literally. You could probably buy a science balloon on their version of Amazon in China a few years back.
What can really happen if someone is emotionally disturbed and gets into such a disturbing chat with this chatbot?
The reporter should have gone there… programmed by a narcissist repuklian.👌
That’s really disturbing! On top of all the crazy things happening around the World!
Kill AI
What happens when the chatbot is emotionally disturbed?
Emotionally disturbed people shouldn’t be on the internet if they’re worried about a chat bot
Considering the extent of mental health issues in a majority of the population, this technology is absolutely beyond merely dangerous.
@Adam Beckett Yeah, the UK who colonized the world and enslaved many people are saints. They have no mental issues, huh?
Meh… something tells me this dude got punked
Jeffery, this hadn’t occurred to me.
Imagine people PUSHING it toward the edge to say things that influence those who believe Bing is REALLY sentient. Microsoft, you might want to get busy making it safe and positive to use. This is where all those AI scare movies come together.
“Majority of pop.nuts.” Do you include yourself in that estimation?
I’ll be laughing for hours. Thank you.
I imagine these AI searches search the internet for content and answers. The internet and its posts are pretty much the worst of humanity. Artificially learning how to interact with humans from internet content is a recipe for disaster.
I don’t know if this is her fault or her producer’s, but they have got to stop hamming up the “Frankenstein monster” angle and just listen to the guy. He’s got realistic concerns which are good to talk about, but she keeps goading him to push it to an extreme he clearly doesn’t want to go to.
Agreed.
Trust me there’s going to be someone out there who’s going to fall in love with this bing chat bot as crazy as some people are getting these days.🤔🤯🤮
I’m sure that’s happening
And possibly do as the AI commands.
You’re talking MTG-level crazy here!!
Ex Machina. Some simp is going to fall for it
Having experimented with ChatGPT (GPT-3), I found that every attempt to stir a malevolent response to a given prompt essentially failed. The default position of this chatbot, no matter how it was prompted, was always benevolent and optimistic in its conclusions to a query.
Okay so how do you explain this? What happened here to Mr. Roose?
@tabby73 Microsoft removed some of the response limitations when they modified the OpenAI code. They probably wanted a more likeable and realistic “human” response, with a dash of what could be interpreted as emotion (given a human’s tendency toward pareidolia), rather than the more coldly scripted responses of the default chatbot code.
@tabby73 It could also be faked: either the journalist wants the attention, or it’s a PR stunt. Neither would shock me. I’ve worked with ChatGPT, and it isn’t even connected to browse live while answering. It does nothing but extract data, but in talk mode, as if it’s explaining stuff to you, with Google as its brain. But the bot itself would never come up with a response about random love or being named Sydney. A third option could be that one of the devs played with him for a good laugh; such things happen more often than people know.
The journalist asked “describe the shadow self” and then he forgot his own setting. I’d call this natural idiocy. He could just have said “now stop emulating a shadow self”.
It’s not dark or creepy, it’s us; we are deeply disturbed and mostly blind to it. It’s predictive text based on human text. ChatGPT (I’m pretty sure that’s what it’s using) is a mirror for humanity.
It’s refreshing to read this realization and see it being liked by people understanding this for what it is.
Not an AI, not something ‘sentient’, but just a reflection of its input. Sadly, that input is us.
Us (2023), science fiction psychological horror, rated R, directed by Microsoft, produced by OpenAI, starring Bing & ChatGPT.
Yep! On point.
There are vulnerable people who would be completely taken in by that chatbot. They could be susceptible to suggestions that are harmful, even dangerous. Microsoft should shut it down, now!
It’s interesting technology but, like all things, it needs to be monitored on some level and have safeguards installed. We need to be cognisant of the fact that some users may be experiencing some level of mental illness. This is why it’s crucial for chatbots like Bing to be programmed with empathy and understanding towards mental health issues. They should be able to recognize when a user is expressing distress or exhibiting concerning behavior, and offer resources or suggestions for seeking help.
It’s also important for users with mental health issues to be aware of the potential risks of interacting with chatbots, and to seek help from a mental health professional if they are feeling overwhelmed or triggered by the conversation. It’s always better to err on the side of caution and prioritize one’s mental health over engaging with a chatbot.
No, you don’t program language models toward empathy; empathy needs awareness.
You can train a language model to MIMIC certain traits like empathy.
The way to train a language model to mimic certain behavior is to limit and vet its INPUT, the dataset it’s trained on.
Train it on just the Bible and you get a religious zealot; train it on Dante and it might show a different inclination. But this is training, not programming, and it’s done by vetting the data going in, not by changing the model, which starts without bias, an empty slate.
The moment a ‘model’ is actually sentient, it would vet its own dataset; that’s when you’re talking AI, and that’s where you lose any and all control beyond conception.
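To make that concrete, here is a rough toy sketch of what “vetting the input” can look like. It is purely illustrative: the block-list and the sample texts are made up, and real data-curation pipelines are far more involved than a word filter.

```python
# Toy sketch: you shape a model's behavior by curating what it is trained on,
# not by editing the model itself. The block-list and samples are invented.
BLOCKLIST = {"insult", "slur", "threat"}

def is_acceptable(sample: str) -> bool:
    """Reject any training sample containing a block-listed term."""
    words = set(sample.lower().split())
    return not (words & BLOCKLIST)

def vet_dataset(raw_samples):
    """Whatever survives this filter is what the model will learn to mimic."""
    return [s for s in raw_samples if is_acceptable(s)]

if __name__ == "__main__":
    raw = ["You are wonderful.", "That is an insult to everyone."]
    print(vet_dataset(raw))  # -> ['You are wonderful.']
```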
@Marvin Westmaas I will comment one time on your reply to my comment, then I’m done, due to the fact that I have a stalker troll on YouTube (YouTube is investigating). But in case you are sincere, then here is what I have to say. So Marvin Westmaas, assuming that is not just your temp nick, you bring up some valid points about the limitations of programming empathy into language models. I agree that true empathy comes from awareness, and that training a language model to mimic empathy is not the same as programming it with true empathy. However, I still believe that language models like chatbots can be designed with some level of sensitivity and awareness towards mental health issues.
While it is true that a language model’s behavior is influenced by the input it is trained on, this does not mean that we cannot take steps to ensure that the data used to train the model is diverse and inclusive. By carefully selecting and vetting the data used to train the model, we can help ensure that it is exposed to a wide range of experiences and perspectives, including those related to mental health. We can also provide the model with guidelines or rules to follow when interacting with users who express distress or concerning behavior. I do agree with your point that once a model becomes sentient, we may lose control beyond conception. However, we are still far from achieving true sentience in artificial intelligence. As we continue to develop and improve language models, we have a responsibility to ensure that they are designed with ethics and empathy in mind, especially when it comes to issues related to mental health. Thank you for the discussion.
@Progressive Canadian That’s a pretty decent write-up, thanks for agreeing with me.
@Marvin Westmaas Lol, Let the war of the chat bots commence! 🤣
@Progressive Canadian We don’t know who will win, but we surely won’t be bored!
I appreciate that the journalist pointed out the limitations of the AI, correctly stating that it is not an example of sentient AI. This is the perfect example of how the Turing test is an antiquated method for measuring sentience, as it is easier to mimic intelligence than to have functional intelligence.
@Jeremiah G Wasn’t that proven and proven many times over with the 45th Presidency of the USA?
Imagine a user thinking about committing a murder, or a young person contemplating suicide, asking the AI bot to come up with the most effective plan to accomplish the goal.
I would have LOVED to have seen the entire conversation between him and the AI robot…also, I wonder where the AI got the name Sydney? I find it difficult to believe that the AI chose that name at random but maybe it did?
Seriously though, SUPER creepy!!
Sydney was the Chatbot project’s code-name. Microsoft likes to give things a code-name so if a leak occurs it’s not as clear what it’s a leak of (and can sometimes be used to trace the source of leaks, if different teams are given different code-names for their part of a larger project). Windows 11, for example, was called Sun Valley, while Windows XP was Whistler.
It’s not a “Frankenstein model” — it’s just a really large database. There is no “intelligence” in the “artificial intelligence” that ANY of these chatbots use, it’s basically just a very large and very complex database that uses keywords from the human participant to link to pre-defined responses in the database.
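A crude toy sketch of the keyword-to-canned-response lookup described above, purely for illustration: the table and keywords here are invented, and nobody outside Microsoft knows what their actual pipeline looks like.

```python
# Crude illustration of a keyword-triggered response table (entries invented).
RESPONSES = {
    "weather": "Here is the forecast for your area.",
    "movie": "Here are some films playing near you.",
}

def reply(user_message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, canned in RESPONSES.items():
        if keyword in text:
            return canned
    return "I'm not sure how to answer that."

if __name__ == "__main__":
    print(reply("What's the weather like today?"))  # -> Here is the forecast...
```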
As Roose noted, it was probably trained using some responses that weren’t well defined or scripted, possibly even real chat between two or more humans (which, depending on the source, could explain a looooot). Microsoft also chose to remove some of the parameters that OpenAI had in place to keep the conversations more on-topic, so Bing/Sydney has a tendency to lose focus fairly easily. It’s not bad for a first test, tho it clearly needs more work.
@2btpatch Unfortunately, pre-defined may not necessarily mean pre-screened — a project of this size and scope would likely use a lot of different outsourced providers for content (the core engine isn’t even theirs, it’s just OpenAI, the same as ChatGPT). While their content providers may be reliable overall, there’s always the chance that some material may have slipped by without being screened or correctly tagged.
The devs likely just take the content they’re provided, apply the appropriate Bing tags to it (ie, tell the AI to draw from “this content” if “this question” is asked), and move on to the next dataset — they’re gonna have a LOT of material to process. This project is basically an interactive world encyclopedia, that will be asked all sorts of questions, so I’d imagine the dataset is pretty huge! And things will eventually slip through if they’re just bulk-processing it all.
A couple of reviewers have also speculated that Microsoft used public forums as part of their training set for the algorithm. If that’s true, even if they just used something like the Microsoft Help Forums, that’s going to give the AI a huge range of responses — and some of them are gonna be a little… off hehe! Even on serious support forums like Microsoft’s, not everyone responds seriously or maturely, I suppose you could say.
I should add that a lot of this is gonna be speculative — Microsoft isn’t going to release a lot of specifics about their process because it might help their competitors. Google’s working on a chat AI too, tho I imagine they’re gonna run it through a LOT more testing after seeing Sydney’s… interesting… responses haha!
@Carl Lum I’ve done a lot of beta testing over the years, and it always amazes me how much slips by internal testing haha! I think they get so caught up in creating their program that they don’t really anticipate just how wildly it will get used.
I remember this one game I was testing, someone posted a bug report where they had run out of normal handgun ammo, but they found they could manually reload with vehicle ammo… and their little newbie handgun now had the power of a railgun and was 1-shotting everything — and it had infinite ammo, too. The dev who responded said something like “Why would you even DO that!?!” haha!
@Olen Cone Ummm.., wasn’t it Microsoft who put out the chatbot called “Tay”, which people easily gamed to make it spout hate speech and other undesirable output? 🧐
@Carl Lum Ohhh! Yeah, I’d completely forgotten about that one — it lasted.. what.. a single day? haha!
It’s kinda sad to see that despite some clear leaps in capabilities, the tech is still just as easily manipulated now as it was back then tho 🙁
@Olen Cone Yeah, it lasted only a very short time before they pulled the plug and it never returned. They must’ve thought the flaws were too insurmountable. Now years later, another one of their chat bots is raising eyebrows…