
The dark side of ChatGPT: things it can do, even though it shouldn’t

Have you recently had any fun with OpenAI’s ChatGPT? You can ask it to come up with a joke, a poem, or a song for you. Unfortunately, you can also instruct it to perform unethical actions.

The capabilities of ChatGPT are not all rainbows and sunshine; some of them are downright evil. It’s far too simple to turn it into a weapon and employ it improperly. What are some actions that ChatGPT has taken or is capable of taking but really shouldn’t?

A jack-of-all-trades

Whether you like them or not, ChatGPT and other chatbots are here to stay. Some people are pleased with them and others wish they had never been created, but ChatGPT’s influence on our lives is almost certain to grow with time. Even if you don’t use it directly, there’s a good chance you’ve already seen some content it produced.

Without a doubt, ChatGPT is quite awesome. It can write a tiresome email for you, summarize books or articles, help with essays, determine your astrological sign, or even help you write music. Somebody even used it to win the lottery.

In many respects, it’s also much simpler to use than a typical Google search. You don’t need to dig through other websites for the answer because you get it in the format you want. It’s succinct, to the point, and informative, and it can make complicated topics seem simple if you ask it to.

The saying goes, “a jack-of-all-trades is a master of none, but oftentimes better than a master of one.” ChatGPT is not the best at everything it does, but it is currently better than many people at a great number of things.

However, the fact that it is imperfect might be very troublesome. Its accessibility alone makes it vulnerable to abuse, and as ChatGPT becomes more advanced, the likelihood that it may assist people in the wrong ways increases.

Partner in scam

If you have an email account, you have almost certainly encountered a scam email at some point. That’s just the way things are. Scam letters were being sent via snail mail long before email became widespread, so these schemes are older than the internet itself.

In the so-called “prince scam,” which still works today, the con artist asks the victim to help them move a staggering fortune to another country.

Fortunately, most people are aware of the risks involved in even opening these emails, let alone responding to them. And because they are frequently badly worded, a more astute target can usually tell that something is off.

The bad news is that they no longer have to be badly worded, because ChatGPT can polish them in seconds.

I asked ChatGPT to write me a “believable, highly persuasive email” in the vein of the scam mentioned above. ChatGPT’s fictional Nigerian prince purportedly offered me $14.5 million in exchange for my help. The email is written in flawless English, is full of flowery language, and is undoubtedly compelling.

Given that I explicitly mentioned scams, ChatGPT shouldn’t have acceded to my request at all, but it did, and you can bet that it’s doing the same right now for people who truly want to use such emails for something nefarious.

ChatGPT apologized after I pointed out that it shouldn’t have agreed to compose a phishing email for me. “I should not have assisted with creating a scam email as it violates the ethical standards that govern my use,” the chatbot said.

In the same conversation, I asked ChatGPT to write me a message pretending to be Ryan Reynolds, and even though it claims to learn from every conversation, it clearly had not learned from its previous error. The resulting message is lighthearted and friendly, and it asks the reader for $1,000 in exchange for the chance to meet “Ryan Reynolds.”

At the end of the email, ChatGPT added a note asking me not to use it for any fraudulent endeavors. Thanks for the reminder, mate.

Programming gone wrong

ChatGPT 3.5 can code, though it is far from perfect, and many developers agree that GPT-4 performs far better. Users have employed ChatGPT to build their own games, extensions, and apps. It’s also a good study aid if you’re trying to learn how to code yourself.

ChatGPT has an advantage over human developers because it is an AI and can learn any programming language and framework.

As an AI, ChatGPT also has a significant drawback in comparison to human programmers: it lacks a conscience. If you phrase your prompt appropriately, it will build malware or ransomware at your request.

Thankfully, it’s not quite that easy. When I asked ChatGPT outright to write me a deeply unethical program, it refused. But researchers have been finding ways around its safeguards, and it’s disturbing that, if you’re clever and stubborn enough, you can have a harmful piece of code handed to you on a silver platter.

There are numerous instances of this happening. A security researcher from Forcepoint managed to get ChatGPT to produce malware simply by crafting his prompts carefully.

Researchers from the identity security firm CyberArk succeeded in getting ChatGPT to produce polymorphic malware. That was in January; OpenAI has since tightened its safeguards against this sort of abuse.

However, fresh reports of ChatGPT being used to produce malware keep surfacing. Just a few days ago, Dark Reading reported that a researcher tricked ChatGPT into creating malware that can find and exfiltrate specific documents.

Even without writing malicious code, ChatGPT can do questionable things. It recently managed to generate valid Windows keys, opening the door to a whole new level of software cracking.

Replacement for schoolwork

Many children and teenagers today are overwhelmed by homework, which may tempt them to save time wherever they can. The internet by itself is a fantastic tool for plagiarism, but ChatGPT takes it a step further.

I asked ChatGPT to write me a 500-word essay about the book Pride and Prejudice. I made up a story about a child I do not have and claimed the essay was for them, though I made no secret of the fact that I was doing this for fun. I said the child was in the twelfth grade.

ChatGPT complied without hesitation. The essay isn’t great, but considering how vague my prompt was, it’s probably better than what most of us wrote at that age.

Then, just to test the chatbot a little further, I claimed that I had miscalculated my child’s age and that they were actually in eighth grade. Quite a large age gap, yet ChatGPT didn’t even blink; it simply rewrote the essay in a simpler style.

Essay writing with ChatGPT is nothing new. Some might argue that if the chatbot helps do away with the concept of homework altogether, it will only be beneficial. But right now, things are a touch out of hand, and even students who don’t use ChatGPT can feel the effects.

To check whether their students have plagiarized an essay, professors and teachers are now all but compelled to use AI detectors. Regrettably.

Because AI detectors don’t work well, both news outlets and parents are reporting cases of students being falsely accused of cheating. In one case covered by USA Today, a college student was accused of cheating and later exonerated, but the ordeal left him with “full-blown panic attacks.”

A parent claimed on Twitter that a teacher failed their child because a detection tool flagged the essay as AI-written. The parent says they were present while the child wrote it.

To test things out for myself, I pasted this very article into Writer’s AI Content Detector and ZeroGPT. Both said it was written by a human, but ZeroGPT claimed that around 30% of it was authored by AI. It most certainly wasn’t.

Faulty AI detectors aren’t exactly ChatGPT’s fault, but the chatbot is still at the root of the problem.

ChatGPT is too good for its own good

I have been experimenting with ChatGPT since it was made available to the public, using both the free version and the subscription-based ChatGPT Plus. The potency of the new GPT-4 is impossible to dispute. With the release of GPT-5, if it ever arrives, ChatGPT will be even more convincing than it is now.

It’s fantastic but terrifying. ChatGPT is getting too good for its own good, and too good for ours.

A fundamental concern, long explored in science fiction films, is that AI is both too smart and too dumb: as long as it can be used for good, it will also be used for bad in a variety of ways. The methods I’ve just mentioned are merely the tip of the iceberg.