ChatGPT Archives - Innovation Discoveries
https://power2innovate.com/tag/chatgpt/
Latest Scientific Discoveries in Innovation

How will AIs like ChatGPT affect elections this year?
https://power2innovate.com/how-will-ais-like-chatgpt-affect-elections-this-year/
Fri, 01 Mar 2024 15:49:31 +0000



More than half the world’s population, including India, the US and UK, will have the chance to vote for new governments in 2024

WOJTEK RADWANSKI/AFP via Getty Images

The biggest election year in the history of the world is under way, and we have just had our first glimpse at how artificial intelligence is being used by the shadowy world of government-backed hackers. These groups could have significant impacts on democratic processes through hacking, disinformation or leaking sensitive information. In February, Microsoft and OpenAI published a blog post in which they identified groups affiliated with…


ChatGPT can tailor political ads to match users' personalities
https://power2innovate.com/chatgpt-can-tailor-political-ads-to-match-users-personalities/
Tue, 20 Feb 2024 10:12:53 +0000


Generative AI can rewrite political adverts on social media to target users with different personality types, making it easier to manipulate elections using personal data on a large scale


ChatGPT wrote code that can make databases leak sensitive information
https://power2innovate.com/chatgpt-wrote-code-that-can-make-databases-leak-sensitive-information/
Thu, 26 Oct 2023 00:30:45 +0000



A vulnerability in OpenAI’s ChatGPT – now fixed – could have been used by malicious actors

Rokas Tenys/Shutterstock

Researchers manipulated ChatGPT and five other commercial AI tools to create malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services in a first-of-its-kind demonstration.

The work has already led the companies responsible for some of the AI tools – including Baidu and OpenAI – to implement changes to prevent malicious users from taking advantage of the vulnerabilities.

“It’s the very first study to demonstrate that vulnerabilities of large language models in general can be exploited as an attack path to online commercial applications,” says Xutan Peng, who co-led the study while at the University of Sheffield in the UK.

Peng and his colleagues looked at six AI services that can translate human questions into the SQL programming language, which is commonly used to query computer databases. “Text-to-SQL” systems that rely on AI have become increasingly popular – even standalone AI chatbots, such as OpenAI’s ChatGPT, can generate SQL code that can be plugged into such databases.

The researchers showed how this AI-generated code can be made to include instructions to leak database information, which could open the door to future cyberattacks. It could also purge system databases that store authorised user profiles, including names and passwords, and overwhelm the cloud servers hosting the databases through a denial-of-service attack. Peng and his colleagues presented their work at the 34th IEEE International Symposium on Software Reliability Engineering on 10 October in Florence, Italy.

Their tests with OpenAI’s ChatGPT back in February 2023 revealed that the standalone AI chatbot could generate SQL code that damaged databases. Even someone using ChatGPT to generate code to query a database for an innocent purpose – such as a nurse interacting with clinical records stored in a healthcare system database – might be given harmful SQL code that could damage the database.

“The code generated from these tools may be dangerous, but these tools may not even warn the user,” says Peng.
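One mitigation the quote points at is refusing to run generated SQL unchecked. Below is a minimal, hypothetical guard (a sketch of my own, not code from the study) that only lets a single read-only SELECT statement through before it reaches the database:

```python
import re

# Keywords that indicate a destructive or privilege-changing statement.
DANGEROUS = re.compile(
    r"\b(drop|delete|truncate|update|insert|alter|grant)\b", re.IGNORECASE
)

def is_safe_select(sql: str) -> bool:
    """Return True only for a single, read-only SELECT statement.
    A deliberately crude filter, not a full SQL parser."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:          # multiple statements: reject outright
        return False
    stmt = statements[0]
    if not stmt.lower().startswith("select"):
        return False
    return not DANGEROUS.search(stmt)  # no destructive keywords anywhere

print(is_safe_select("SELECT name FROM patients WHERE id = 42"))  # True
print(is_safe_select("SELECT 1; DROP TABLE users"))               # False
```

A real deployment would instead use a proper SQL parser and a database account with read-only permissions, but even a crude allowlist like this would catch the piggybacked destructive statements the researchers describe.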

The researchers disclosed their findings to OpenAI. Their follow-up testing suggests that OpenAI has now updated ChatGPT to shut down the text-to-SQL issues.

Another demonstration showed similar vulnerabilities in Baidu-UNIT, an intelligent dialogue platform offered by the Chinese tech giant Baidu that automatically converts client requests written in Chinese into SQL queries for Baidu’s cloud service. After the researchers sent a disclosure report with their testing results to Baidu in November 2022, the company gave them a financial reward for finding the weaknesses and patched the system by February 2023.

But unlike ChatGPT and other AIs that rely on large language models – which can perform new tasks without much or any prior training – Baidu’s AI-powered service leans more heavily on prewritten rules to carry out its text-to-SQL conversions.

Text-to-SQL systems based on large language models seem to be more easily manipulated into creating malicious code than older AIs that rely on prewritten rules, says Peng. But he still sees promise in using large language models for helping humans query databases, even if he describes the security risks as having “long been underrated before our study”.

Neither OpenAI nor Baidu responded to a New Scientist request for comment on the research.

Scientists prefer feedback from ChatGPT to judgement by peers
https://power2innovate.com/scientists-prefer-feedback-from-chatgpt-to-judgement-by-peers/
Wed, 18 Oct 2023 12:31:45 +0000



Peer-reviewing research is an important component of modern science

PolyPloiid/Shutterstock

ChatGPT can provide researchers with useful feedback on their papers, suggesting it could supplement the human peer review process that helps scientific journals decide which studies to publish. But others are sceptical that ChatGPT could play a role in peer review.

Peer review is a critical component of scientific publishing. However, many journals and research conferences are struggling to recruit enough human peer reviewers – who typically volunteer their time for free – to evaluate the growing number of submitted papers.

“It’s getting harder …


ChatGPT gets better marks than students in some university courses
https://power2innovate.com/chatgpt-gets-better-marks-than-students-in-some-university-courses/
Thu, 24 Aug 2023 19:19:46 +0000


Students may be able to use ChatGPT to help with their university assignments, but the chatbot can lack critical analysis skills

Angel Garcia/Bloomberg via Getty Images

ChatGPT may be as good as or better than students at assessments in around a quarter of university courses. However, this generally only applies to questions with a clear answer that require memory recall, rather than critical analysis.

Yasir Zaki and his team at New York University Abu Dhabi in the United Arab Emirates contacted colleagues in other departments asking them to provide assessment questions from courses taught at the university, including computer science, psychology, political science and business.

These colleagues also provided real student answers to the questions. The questions were then run through the artificial intelligence chatbot ChatGPT, which supplied its own responses.

Next, both sets of responses were sent to a team of graders. “These graders were not made aware of the sources of these answers, nor were they aware of the purpose of the grading,” says Zaki.

In nine out of the 32 courses surveyed, ChatGPT’s answers were rated as good as or better than those of students. At times, its answers were substantially better. For example, it achieved almost double the average score of students when answering questions from a course called Introduction to Public Policy.

“ChatGPT performed much better on questions that required information recall, but performed poorly on questions which required critical analysis,” says Zaki.

The results highlight an issue with the way university assessments are set, says Thomas Lancaster at Imperial College London. They should probe students’ critical thinking, which may not be achieved by ChatGPT. “If [better answers are] possible [with ChatGPT], it suggests that there are flaws in the assessment design.”

Lancaster also says that many of the assessments that are susceptible to being cheated on via ChatGPT were already vulnerable to existing contract cheating services, where students pay professional essay writers to do their work – and those writers may similarly fail to provide critical analysis.

Separately, Zaki and his team surveyed academics and students in the UK, US, India, Japan and Brazil about their attitudes towards ChatGPT. Across all of the countries, the students were more likely to say that they would use the chatbot than the academics thought they would.

AI news recap for July: While Hollywood strikes, is ChatGPT getting worse?
https://power2innovate.com/ai-news-recap-for-july-while-hollywood-strikes-is-chatgpt-getting-worse/
Fri, 28 Jul 2023 10:29:04 +0000


Members of the SAG-AFTRA actors union join writers on the picket lines in Los Angeles on 14 July 2023 – the first time in 63 years that both unions have been on strike at the same time.

Hollywood actors are concerned about AI

Jim Ruymen/UPI Credit: UPI/Alamy

Hollywood actors strike over use of AI in films and other issues

Artificial intelligence can now create images, novels and source code from scratch. Except it isn’t really from scratch, because a vast amount of human-generated examples are needed to train these AI models – something that has angered artists, programmers and writers and led to a series of lawsuits.

Hollywood actors are the latest group of creatives to turn against AI. They fear that film studios could take control of their likeness and have them “star” in films without ever being on set, perhaps taking on roles they would rather avoid and uttering lines or acting out scenes they would find distasteful. Worse still, they might not get paid for it.

That is why the Screen Actors Guild and the American Federation of Television and Radio Artists (SAG-AFTRA) – which has 160,000 members – is on strike until it can negotiate AI rights with the studios.

At the same time, Netflix has come under fire from actors for a job listing for people with experience in AI, paying a salary of up to $900,000.

Today’s large-scale image training data sets contain synthetic data from generative models. Researchers found these images using simple queries on haveibeentrained.com. Generative models trained on the LAION-5B data set are thus closing an autophagous (self-consuming) loop that can lead to progressively amplified artifacts, lower quality and diversity and other unintended consequences.

The quality of AI-generated images may degrade over time

Rice University

AIs trained on AI-generated images produce glitches and blurs

Speaking of training data, we wrote last year that the proliferation of AI-generated images could be a problem if they ended up online in great numbers, as new AI models would hoover them up to train on. Experts warned that the end result would be worsening quality. At the risk of making an outdated reference, AI would slowly destroy itself, like a degraded photocopy of a photocopy of a photocopy.

Well, fast-forward a year and that seems to be precisely what is happening, leading another group of researchers to make the same warning. A team at Rice University in Texas found evidence that AI-generated images making their way into training data in large numbers slowly distorted the output. But there is hope: the researchers discovered that if the amount of those images was kept below a certain level, then this degradation could be staved off.

ChatGPT can get its sums wrong

Tada Images/Shutterstock

Is ChatGPT getting worse at maths problems?

Corrupted training data is just one way that AI can start to fall apart. One study this month claimed that ChatGPT was getting worse at mathematics problems. When asked to check if 500 numbers were prime, the version of GPT-4 released in March scored 98 per cent accuracy, but a version released in June scored just 2.4 per cent. Strangely, by comparison, GPT-3.5’s accuracy seemed to jump from just 7.4 per cent in March to almost 87 per cent in June.
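For context, the benchmark task itself is mechanical to verify: a few lines of code can check primality exactly, which is how answers to such questions can be scored. This is a generic sketch of that kind of check, not the study's actual test harness:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:   # only need divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

def accuracy(answers: dict) -> float:
    """Fraction of model answers {number: claimed_primality} that match
    the ground truth computed by is_prime."""
    correct = sum(answers[n] == is_prime(n) for n in answers)
    return correct / len(answers)

print(is_prime(97))   # True
print(is_prime(91))   # False (91 = 7 * 13)
```

The point of the studies is not that the task is hard, but that the same model version can go from near-perfect to near-useless on it after retuning.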

Arvind Narayanan at Princeton University, who in another study found other changing performance levels, puts the problem down to “an unintended side effect of fine-tuning”. Basically, the creators of these models are tweaking them to make the outputs more reliable, accurate or – potentially – less computationally intensive in order to cut costs. And although this may improve some things, other tasks might suffer. The upshot is that, while AI might do something well now, a future version might perform significantly worse, and it may not be obvious why.

Bigger data isn’t always better

Vink Fan/Shutterstock

Using bigger AI training data sets may produce more racist results

It is an open secret that a lot of the advances in AI in recent years have simply come from scale: larger models, more training data and more computer power. This has made AIs expensive, unwieldy and hungry for resources, but has also made them far more capable.

Certainly, there is a lot of research going on to shrink AI models and make them more efficient, as well as work on more graceful methods to advance the field. But scale has been a big part of the game.

Now though, there is evidence that this could have serious downsides, including making models even more racist. Researchers ran experiments on two open-source data sets: one contained 400 million samples and the other had 2 billion. They found that models trained on the larger data set were more than twice as likely to associate Black female faces with a “criminal” category and five times more likely to associate Black male faces with being “criminal”.


AI can identify targets

Athena AI

Drones with AI targeting system claimed to be ‘better than human’

Earlier this year we covered the strange tale of the AI-powered drone that “killed” its operator to get to its intended target, which was complete nonsense. The story was quickly denied by the US Air Force, which did little to stop it being reported around the world regardless.

Now, we have fresh claims that AI models can do a better job of identifying targets than humans – although the details are too secretive to reveal, and therefore verify.

“It can check whether people are wearing a particular type of uniform, if they are carrying weapons and whether they are giving signs of surrendering,” says a spokesperson for the company behind the software. Let’s hope they are right and that AI can make a better job of waging war than it can identifying prime numbers.

If you enjoyed this AI news recap, try our special series where we explore the most pressing questions about artificial intelligence. Find them all here:

How does ChatGPT work? | What generative AI really means for the economy | The real risks posed by AI | How to use AI to make your life simpler | The scientific challenges AI is helping to crack | Can AI ever become conscious?

How does ChatGPT work and do AI-powered chatbots “think” like us?
https://power2innovate.com/how-does-chatgpt-work-and-do-ai-powered-chatbots-think-like-us/
Tue, 25 Jul 2023 17:13:30 +0000



The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI producing responses that are so uncannily fluent that it has been hard not to wonder if this heralds a turning point of some sort.

There has been no shortage of hype. Microsoft researchers given early access to GPT-4, the latest version of the system behind ChatGPT, argued that it has already demonstrated “sparks” of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company’s AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.

All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer when you get to grips with how they work and, with that in mind, the extent to which they “think” like us.

At the heart of all these chatbots is a large language model (LLM) – a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together.

LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: …
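The core objective described above – predicting which words are likely to appear together – can be illustrated with a toy bigram model. This is a deliberately simplified stand-in of my own, not how ChatGPT is actually built, but it shows the same next-word prediction idea in miniature:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real LLMs optimise the same
# next-word objective, but with deep neural networks over vast text corpora.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' ("cat" follows "the" twice; "mat" and "fish" once each)
```

Scaling this counting idea up to billions of parameters and trillions of words of text is, loosely speaking, what turns a statistical parlour trick into a fluent chatbot.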


GPT-4: Is the AI behind ChatGPT getting worse?
https://power2innovate.com/gpt-4-is-the-ai-behind-chatgpt-getting-worse/
Mon, 24 Jul 2023 22:15:14 +0000


ChatGPT is getting worse at some tasks


MauriceNorbert/Alamy

The AI powering ChatGPT may provide completely different answers to the same mathematical problems over time. Those findings from recent experiments have fuelled an ongoing debate about whether the AI chatbot’s performance is getting worse – and have spurred the firm behind it, OpenAI, to reassure customers that applications built on ChatGPT will not continually break.

“The takeaway message is that the behaviour of the ‘same’ large language model can change substantially,” says Lingjiao Chen at Stanford University in California. “It is important …


ChatGPT outperforms humans at labelling some data for other AIs
https://power2innovate.com/chatgpt-outperforms-humans-at-labelling-some-data-for-other-ais/
Sun, 11 Jun 2023 12:34:38 +0000



ChatGPT could be used to help train other AIs

Shutterstock / Iryna Imago

The artificial intelligence chatbot ChatGPT could automate some aspects of AI training that currently rely on people. The chatbot can accurately classify and label text data used in training AI at a cost of just a third of a cent per work sample – about 20 times cheaper than crowdsourced human labour.
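The quoted costs are easy to sanity-check. A back-of-the-envelope calculation (my own arithmetic from the figures in the text, not numbers from the study) puts the implied per-sample prices at:

```python
chatgpt_cost = 1 / 3 / 100       # "a third of a cent", in US dollars
human_cost = chatgpt_cost * 20   # "about 20 times cheaper" implies this

print(f"ChatGPT: ${chatgpt_cost:.4f} per sample")  # ChatGPT: $0.0033 per sample
print(f"Human:   ${human_cost:.3f} per sample")    # Human:   $0.067 per sample
```

In other words, crowdsourced human labelling at roughly 7 cents per sample is what the chatbot's third-of-a-cent price is being compared against.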

“We expect that in some tasks ChatGPT could replace humans,” says Fabrizio Gilardi at the University of Zürich in Switzerland. In other tasks, ChatGPT could help reduce the “amount of …


Plagiarism tool gets a ChatGPT detector – some schools don’t want it
https://power2innovate.com/plagiarism-tool-gets-a-chatgpt-detector-some-schools-dont-want-it/
Sun, 11 Jun 2023 05:05:38 +0000



Turnitin’s software is used on the work of millions of students

Shutterstock/Rawpixel.com

Plagiarism detection software that is already screening the work of more than 62 million students worldwide is getting a major upgrade to detect AI-generated writing – though some universities say they don’t want it. This comes several months after the release of ChatGPT, an AI chatbot that can generate entire essays, prompted major US school districts and universities around the world to ban its use.

It is unclear if AI-generated writing can be reliably detected. Academic researchers have shown that it …

