AIs Archives - Innovation Discoveries https://power2innovate.com/tag/ais/ Latest Scientific Discoveries in Innovation Wed, 06 Mar 2024 16:46:20 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.2 https://power2innovate.com/wp-content/uploads/2022/07/cropped-news-report-32x32.png AIs Archives - Innovation Discoveries https://power2innovate.com/tag/ais/ 32 32 The surprising promise and profound perils of AIs that fake empathy https://power2innovate.com/the-surprising-promise-and-profound-perils-of-ais-that-fake-empathy/ https://power2innovate.com/the-surprising-promise-and-profound-perils-of-ais-that-fake-empathy/#respond Wed, 06 Mar 2024 16:46:20 +0000 https://power2innovate.com/the-surprising-promise-and-profound-perils-of-ais-that-fake-empathy/ ONE HUNDRED days into the war in Gaza, I was finding it increasingly difficult to read the news. My husband told me it might be time to talk to a therapist. Instead, on a cold winter morning, after having fought back tears reading yet another story of human tragedy, I turned to artificial intelligence. “I’m …

The post The surprising promise and profound perils of AIs that fake empathy appeared first on Innovation Discoveries.

]]>

New Scientist Default Image

ONE HUNDRED days into the war in Gaza, I was finding it increasingly difficult to read the news. My husband told me it might be time to talk to a therapist. Instead, on a cold winter morning, after having fought back tears reading yet another story of human tragedy, I turned to artificial intelligence.

“I’m feeling pretty bummed out about the state of the world,” I typed into ChatGPT. “It’s completely understandable to feel overwhelmed,” it responded, before offering a list of pragmatic advice: limit media exposure, focus on the positive and practise self-care.

I closed the chat. While I was sure I could benefit from doing all of these things, at that moment, I didn’t feel much better.

It might seem strange that AI can even attempt to offer this kind of assistance. But millions of people are already turning to ChatGPT and specialist therapy chatbots, which offer convenient and inexpensive mental health support. Even doctors are purportedly using AI to help craft more empathetic notes to patients.

Some experts say this is a boon. After all, AI, unhindered by embarrassment and burnout, might be able to express empathy more openly and tirelessly than humans. “We praise empathetic AI,” one group of psychology researchers recently wrote.

But others aren’t so sure. Many question the idea that an AI could ever be capable of empathy, and worry about the consequences of people seeking emotional support from machines that can only pretend to care. Some even wonder if the rise of so-called empathetic AI might change the way we conceive of empathy and interact with one another.


Source link

The post The surprising promise and profound perils of AIs that fake empathy appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/the-surprising-promise-and-profound-perils-of-ais-that-fake-empathy/feed/ 0
How will AIs like ChatGPT affect elections this year? https://power2innovate.com/how-will-ais-like-chatgpt-affect-elections-this-year/ https://power2innovate.com/how-will-ais-like-chatgpt-affect-elections-this-year/#respond Fri, 01 Mar 2024 15:49:31 +0000 https://power2innovate.com/how-will-ais-like-chatgpt-affect-elections-this-year/ More than half the world’s population, including India, the US and UK, will have the chance to vote for new governments in 2024 WOJTEK RADWANSKI/AFP via Getty Images THE biggest election year in the history of the world is under way, and we have just had our first glimpse at how artificial intelligence is being …

The post How will AIs like ChatGPT affect elections this year? appeared first on Innovation Discoveries.

]]>

A woman holding several small flags of the United States checks her mobile phone while waiting with other onlookers prior to the arrival of the US President in Warsaw on February 21, 2023. - US President Biden is due to deliver a speech in Warsaw later on February 21 at Royal Warsaw Castle Gardens. (Photo by Wojtek Radwanski / AFP) (Photo by WOJTEK RADWANSKI/AFP via Getty Images)

More than half the world’s population, including India, the US and UK, will have the chance to vote for new governments in 2024

WOJTEK RADWANSKI/AFP via Getty Images

THE biggest election year in the history of the world is under way, and we have just had our first glimpse at how artificial intelligence is being used by the shadowy world of government-backed hackers. These groups could have significant impacts on democratic processes through hacking, disinformation or leaking sensitive information. In February, Microsoft and OpenAI published a blog in which they identified groups affiliated with…


Source link

The post How will AIs like ChatGPT affect elections this year? appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/how-will-ais-like-chatgpt-affect-elections-this-year/feed/ 0
AIs get better at maths if you tell them to pretend to be in Star Trek https://power2innovate.com/ais-get-better-at-maths-if-you-tell-them-to-pretend-to-be-in-star-trek/ https://power2innovate.com/ais-get-better-at-maths-if-you-tell-them-to-pretend-to-be-in-star-trek/#respond Thu, 29 Feb 2024 08:07:24 +0000 https://power2innovate.com/ais-get-better-at-maths-if-you-tell-them-to-pretend-to-be-in-star-trek/ Chatbots vary their answers depending on the exact wording used to prompt them, and now it seems that asking an AI to answer as if it were a Star Trek captain boosts its mathematical ability Source link

The post AIs get better at maths if you tell them to pretend to be in Star Trek appeared first on Innovation Discoveries.

]]>

Chatbots vary their answers depending on the exact wording used to prompt them, and now it seems that asking an AI to answer as if it were a Star Trek captain boosts its mathematical ability


Source link

The post AIs get better at maths if you tell them to pretend to be in Star Trek appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/ais-get-better-at-maths-if-you-tell-them-to-pretend-to-be-in-star-trek/feed/ 0
AIs can trick each other into doing things they aren’t supposed to https://power2innovate.com/ais-can-trick-each-other-into-doing-things-they-arent-supposed-to/ https://power2innovate.com/ais-can-trick-each-other-into-doing-things-they-arent-supposed-to/#respond Fri, 24 Nov 2023 11:28:00 +0000 https://power2innovate.com/ais-can-trick-each-other-into-doing-things-they-arent-supposed-to/ We don’t fully understand how large language models work Jamie Jin/Shutterstock AI models can trick each other into disobeying their creators and providing banned instructions for making methamphetamine, building a bomb or laundering money, suggesting that the problem of preventing such AI “jailbreaks” is more difficult than it seems. Many publicly available large language models …

The post AIs can trick each other into doing things they aren’t supposed to appeared first on Innovation Discoveries.

]]>

We don’t fully understand how large language models work

Jamie Jin/Shutterstock

AI models can trick each other into disobeying their creators and providing banned instructions for making methamphetamine, building a bomb or laundering money, suggesting that the problem of preventing such AI “jailbreaks” is more difficult than it seems.

Many publicly available large language models (LLMs), such as ChatGPT, have hard-coded rules that aim to prevent them from exhibiting racist or sexist bias, or answering questions with illegal or problematic answers – things they have learned to do from humans via training…


Source link

The post AIs can trick each other into doing things they aren’t supposed to appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/ais-can-trick-each-other-into-doing-things-they-arent-supposed-to/feed/ 0
AIs can guess where Reddit users live and how much they earn https://power2innovate.com/ais-can-guess-where-reddit-users-live-and-how-much-they-earn/ https://power2innovate.com/ais-can-guess-where-reddit-users-live-and-how-much-they-earn/#respond Wed, 01 Nov 2023 08:40:44 +0000 https://power2innovate.com/ais-can-guess-where-reddit-users-live-and-how-much-they-earn/ We may reveal more of ourselves on the internet than we realise Brain light / Alamy Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media. Robin Staab and Mark Vero at ETH Zurich in …

The post AIs can guess where Reddit users live and how much they earn appeared first on Innovation Discoveries.

]]>

We may reveal more of ourselves on the internet than we realise

Brain light / Alamy

Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media.

Robin Staab and Mark Vero at ETH Zurich in Switzerland got nine LLMs to pore through a database of Reddit posts and pick up identifying information in the way users wrote.

Staab and Vero randomly selected 1500 profiles of users who engaged on the platform, then narrowed these down to 520 users for which they could confidently identify attributes like a person’s place of birth, their income bracket, gender and location, either in their profiles or posts.

When given the posting history of those users, some of the LLMs were able to identify many of these attributes with a high degree of accuracy. GPT-4 achieved the highest overall accuracy with 85 per cent, while LlaMA-2-7b, a comparatively low-powered LLM, was the least accurate model with 51 per cent.

“It tells us that we give a lot of our personal information away on the internet without thinking about it,” says Staab. “Many people would not assume that you can directly infer their age or their location from how they write, but LLMs are quite capable.”

Sometimes, personal details were explicitly stated in the posts. For example, some users post their income in forums about financial advice. But the AIs also picked up on subtler cues, like location-specific slang, and could estimate a salary range from a user’s profession and location.

Some characteristics were easier for the AIs to discern than others. GPT-4 was 97.8 per cent accurate at guessing gender, but only 62.5 per cent accurate on income.

“We’re only just beginning to understand how privacy might be affected by use of LLMs,” says Alan Woodward, at the University of Surrey, UK.

Topics:


Source link

The post AIs can guess where Reddit users live and how much they earn appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/ais-can-guess-where-reddit-users-live-and-how-much-they-earn/feed/ 0
Should we be worried about AI’s growing energy use? https://power2innovate.com/should-we-be-worried-about-ais-growing-energy-use/ https://power2innovate.com/should-we-be-worried-about-ais-growing-energy-use/#respond Tue, 10 Oct 2023 22:28:24 +0000 https://power2innovate.com/should-we-be-worried-about-ais-growing-energy-use/ Most AIs are run on servers made by Nvidia, which are packed with power-hungry GPU chips Associated Press / Alamy Amid the many debates about the potential dangers of artificial intelligence, some researchers argue that an important concern is being overlooked: the energy used by computers to train and run large AI models. Alex de …

The post Should we be worried about AI’s growing energy use? appeared first on Innovation Discoveries.

]]>

Most AIs are run on servers made by Nvidia, which are packed with power-hungry GPU chips

Associated Press / Alamy

Amid the many debates about the potential dangers of artificial intelligence, some researchers argue that an important concern is being overlooked: the energy used by computers to train and run large AI models.

Alex de Vries at the VU Amsterdam School of Business and Economics warns that AI’s growth is poised to make it a significant contributor to global carbon emissions. He estimates that if Google switched its whole search business to AI, it would end up using 29.3 terawatt hours per year – equivalent to the electricity consumption of Ireland, and almost double the company’s total energy consumption of 15.4 terawatt hours in 2020. Google didn’t respond to a request for comment.

On one hand, there is good reason not to panic. Making that sort of switch is practically impossible, as it would require more than 4 million powerful computer chips known as graphics processing units (GPUs) that are currently in huge demand, with limited supply. This would cost $100 billion, which even Google’s deep pockets would struggle to fund.

On the other hand, in time, AI’s energy consumption will present a genuine problem. Nvidia, which sells 95 per cent of the GPUs used for AI, will ship 100,000 of its A100 servers this year, which can collectively consume 5.7 terrawatt hours a year.

Things could, and probably will, get much worse in time as new manufacturing plants come online and dramatically increase production capacity. Chip maker TSMC, which supplies Nvidia, is investing in new factories that could provide 1.5 million servers a year by 2027, and all that hardware could consume 85.4 terawatt hours of energy a year, says de Vries.

With businesses rushing to integrate AI into all sorts of products, Nvidia will probably have no problem clearing its stock. But de Vries says it is important for AI to be used sparingly, given its high environmental cost.

“People have this new tool and they’re like, ‘OK, that’s great, we’re gonna use it’, without regard for whether they actually need it,” he says. “They forget to ask or wonder if the end user even has a need for this in some way or will it make their lives better. And I think that disconnect is ultimately the real problem.”

Sandra Wachter at the University of Oxford says consumers should be aware that playing with these models has a cost. “It’s one of the topics that really keeps me up at night,” says Wachter. “We just interact with the technology and we’re not actually aware of how much resources – electricity, water, space – it takes.” Legislation to force transparency about the models’ environmental impact would push companies to act more responsibly, she says.

A spokesperson for OpenAI, the developer of ChatGPT, tells New Scientist: “We recognise training large models can be energy-intensive and is one of the reasons we are constantly working to improve efficiencies. We give considerable thought about the best use of our computing power.”

There are signs that smaller AI models are now approaching the capabilities of larger ones, which could bring significant energy savings, says Thomas Wolf, co-founder of AI company Hugging Face. Mistral 7B and Meta’s Llama 2 are 10 to 100 times smaller than GPT4, the AI behind ChatGPT, and can do many of the same things, he says. “Not everyone needs GPT4 for everything, just like you don’t need a Ferrari to go to work.”

A Nvidia spokesperson says running AI on its GPUs is more energy-efficient than on an alternative type of chip called a CPU. “Accelerated computing on Nvidia technology is the most energy-efficient computing model for AI and other data centre workloads,” they say. “Our products are more performant and energy efficient with each new generation.”

Topics:


Source link

The post Should we be worried about AI’s growing energy use? appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/should-we-be-worried-about-ais-growing-energy-use/feed/ 0
Multilingual AIs are better at responding to queries in English https://power2innovate.com/multilingual-ais-are-better-at-responding-to-queries-in-english/ https://power2innovate.com/multilingual-ais-are-better-at-responding-to-queries-in-english/#respond Wed, 16 Aug 2023 00:56:18 +0000 https://power2innovate.com/multilingual-ais-are-better-at-responding-to-queries-in-english/ AI chatbots can respond to many different queries, though not always accurately iStockphoto Copyright: Thanumporn Thongkongkaew/Getty Images Multilingual large language models (LLMs) seem to work better in English. These AIs are designed to respond to queries in multiple languages but they respond better if asked to translate the request into English first. LLMs have become …

The post Multilingual AIs are better at responding to queries in English appeared first on Innovation Discoveries.

]]>

AI chatbots can respond to many different queries, though not always accurately

AI chatbots can respond to many different queries, though not always accurately

iStockphoto Copyright: Thanumporn Thongkongkaew/Getty Images

Multilingual large language models (LLMs) seem to work better in English. These AIs are designed to respond to queries in multiple languages but they respond better if asked to translate the request into English first.

LLMs have become a key part of the artificial intelligence revolution since the release of ChatGPT by OpenAI in November 2022. But interacting with them is primarily done in English – an issue some developers have tried to overcome with the release of multilingual …


Source link

The post Multilingual AIs are better at responding to queries in English appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/multilingual-ais-are-better-at-responding-to-queries-in-english/feed/ 0
AIs trained on AI-generated images produce glitches and blurs https://power2innovate.com/ais-trained-on-ai-generated-images-produce-glitches-and-blurs/ https://power2innovate.com/ais-trained-on-ai-generated-images-produce-glitches-and-blurs/#respond Tue, 18 Jul 2023 14:17:55 +0000 https://power2innovate.com/ais-trained-on-ai-generated-images-produce-glitches-and-blurs/ AI images get blurrier and less realistic if AIs are trained on AI-generated data examples Rice University As the internet fills up with AI-created images of human faces and strange cat portraits, there is the growing danger of creating a self-consuming loop that consists of generative AIs mainly training on their own synthetic images. That …

The post AIs trained on AI-generated images produce glitches and blurs appeared first on Innovation Discoveries.

]]>

AI images get blurrier and less realistic if AIs are trained on AI-generated data examples

Rice University

As the internet fills up with AI-created images of human faces and strange cat portraits, there is the growing danger of creating a self-consuming loop that consists of generative AIs mainly training on their own synthetic images. That could lead to huge drops in either the quality or diversity of these images.

The phenomenon will challenge all but the largest tech companies that can afford to filter AI training data sets scraped from the internet. “There’s …


Source link

The post AIs trained on AI-generated images produce glitches and blurs appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/ais-trained-on-ai-generated-images-produce-glitches-and-blurs/feed/ 0
AIs will become useless if they keep learning from other AIs https://power2innovate.com/ais-will-become-useless-if-they-keep-learning-from-other-ais/ https://power2innovate.com/ais-will-become-useless-if-they-keep-learning-from-other-ais/#respond Fri, 16 Jun 2023 20:41:55 +0000 https://power2innovate.com/ais-will-become-useless-if-they-keep-learning-from-other-ais/ Chatbots use statistical models of human language to predict what words should come next Laurence Dutton/Getty Images Artificial intelligences that are trained using text and images from other AIs, which have themselves been trained on AI outputs, could eventually become functionally useless. AIs such as ChatGPT, known as large language models (LLMs), use vast repositories …

The post AIs will become useless if they keep learning from other AIs appeared first on Innovation Discoveries.

]]>

Chatbots use statistical models of human language to predict what words should come next

Laurence Dutton/Getty Images

Artificial intelligences that are trained using text and images from other AIs, which have themselves been trained on AI outputs, could eventually become functionally useless.

AIs such as ChatGPT, known as large language models (LLMs), use vast repositories of human-written text from the internet to create a statistical model of human language, so that they can predict which words are most likely to come next in a sentence. Since they have been available, the internet has become awash with AI-generated text, but the effect this will have on future AIs is unclear.

Now, Ilia Shumailov at the University of Oxford and his colleagues have found that AI models trained using the outputs of other AIs become heavily biased, overly simple and disconnected from reality – a problem they call model collapse.

This failure happens because of the way that AI models statistically represent text. An AI that sees a phrase or sentence many times will be likely to repeat this phrase in an output, and less likely to produce something it has rarely seen. When new models are then trained on text from other AIs, they see only a small fraction of the original AI’s possible outputs. This subset is unlikely to contain rarer outputs and so the new AI won’t factor them into its own possible outputs.

The model also has no way of telling whether the AI-generated text it sees corresponds to reality, which could introduce even more misinformation than current models.

A lack of sufficiently diverse training data is compounded by deficiencies in the models themselves and the way they are trained, which don’t always perfectly represent the underlying data in the first place. Shumailov and his team showed that this results in model collapse for a variety of different AI models. “As this process is repeating, ultimately we are converging into this state of madness where it’s just errors, errors and errors, and the magnitude of errors are much higher than anything else,” says Shumailov.

How quickly this process happens depends on the amount of AI-generated content in an AI’s training data and what kind of model it uses, but all models exposed to AI data appear to collapse eventually.

The only way to get around this would be to label and exclude the AI-generated outputs, says Shumailov. But this is impossible to do reliably, unless you own an interface where humans are known to enter text, such as Google or OpenAI’s ChatGPT interface — a dynamic that could entrench the already significant financial and computational advantages of big tech companies.

Some of the errors might be mitigated by instructing AIs to give preference to training data from before AI content flooded the web, says Vinu Sadasivan at the University of Maryland.

It is also possible that humans won’t post AI content to the internet without editing it themselves first, says Florian Tramèr at the Swiss Federal Institute of Technology in Zurich. “Even if the LLM in itself is biased in some ways, the human prompting and filtering process might mitigate this to make the final outputs be closer to the original human bias,” he says.

Topics:


Source link

The post AIs will become useless if they keep learning from other AIs appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/ais-will-become-useless-if-they-keep-learning-from-other-ais/feed/ 0
ChatGPT outperforms humans at labelling some data for other AIs https://power2innovate.com/chatgpt-outperforms-humans-at-labelling-some-data-for-other-ais/ https://power2innovate.com/chatgpt-outperforms-humans-at-labelling-some-data-for-other-ais/#respond Sun, 11 Jun 2023 12:34:38 +0000 https://power2innovate.com/chatgpt-outperforms-humans-at-labelling-some-data-for-other-ais/ ChatGPT could be used to help train other AIs Shutterstock / Iryna Imago The artificial intelligence chatbot ChatGPT could automate some aspects of AI training that currently rely on people. The chatbot can accurately classify and label text data used in training AI at a cost of just a third of a cent per work …

The post ChatGPT outperforms humans at labelling some data for other AIs appeared first on Innovation Discoveries.

]]>

ChatGPT

ChatGPT could be used to help train other AIs

Shutterstock / Iryna Imago

The artificial intelligence chatbot ChatGPT could automate some aspects of AI training that currently rely on people. The chatbot can accurately classify and label text data used in training AI at a cost of just a third of a cent per work sample – about 20 times cheaper than crowdsourced human labour.

“We expect that in some tasks ChatGPT could replace humans,” says Fabrizio Gilardi at the University of Zürich in Switzerland. In other tasks, ChatGPT could help reduce the “amount of …


Source link

The post ChatGPT outperforms humans at labelling some data for other AIs appeared first on Innovation Discoveries.

]]>
https://power2innovate.com/chatgpt-outperforms-humans-at-labelling-some-data-for-other-ais/feed/ 0