Apple Intelligence to Begin Rolling Out Next Month
https://www.webpronews.com/apple-intelligence-to-begin-rolling-out-next-month/ (Mon, 09 Sep 2024)

Apple kicked off its “It’s Glowtime” event, launching new hardware and providing a definitive update on its Apple Intelligence plans.

Apple Intelligence is the company’s “personal intelligence system that combines the power of generative models with personal context.” Since Apple first demoed it, Apple Intelligence has provided one of the best demonstrations yet of generative AI’s day-to-day value for the average user.

Reports had surfaced as early as late July that Apple Intelligence would debut with iOS 18.1, not 18.0. Monday’s event helped provide a concrete timeline for when users can expect to get their hands on the tech.

Today, Apple announced that Apple Intelligence, the personal intelligence system that combines the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant, will start rolling out next month with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, with more features launching in the coming months. In addition, Apple introduced the new iPhone 16 lineup, built from the ground up for Apple Intelligence and featuring the faster, more efficient A18 and A18 Pro chips — making these the most advanced and capable iPhone models ever.

Apple Intelligence first launches in U.S. English, and will quickly expand to include localized English in Australia, Canada, New Zealand, South Africa, and the U.K. in December, with additional language support — such as Chinese, French, Japanese, and Spanish — coming next year.

Apple goes on to reiterate the benefits users can expect from Apple Intelligence.

With Writing Tools, users can refine their words by rewriting, proofreading, and summarizing text nearly everywhere they write, including Mail, Notes, Pages, and third-party apps.

In Photos, the Memories feature now enables users to create the movies they want to see by simply typing a description. In addition, natural language can be used to search for specific photos, and search in videos gets more powerful with the ability to find specific moments in clips. The new Clean Up tool can identify and remove distracting objects in the background of a photo — without accidentally altering the subject.

In the Notes and Phone apps, users can record, transcribe, and summarize audio. When a recording is initiated while on a call in the Phone app, participants are automatically notified, and once the call ends, Apple Intelligence also generates a summary to help recall key points.

The company emphasizes its privacy-first approach, with many of the models running locally on-device.

Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia, harnessing the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks — all while protecting users’ privacy and security. Many of the models that power Apple Intelligence run entirely on device, and Private Cloud Compute offers the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.

AI’s Content Grab: Are Companies Crossing the Line with Copyrighted Material?
https://www.webpronews.com/ais-content-grab-are-companies-crossing-the-line-with-copyrighted-material/ (Sat, 07 Sep 2024)

Artificial intelligence (AI) has rapidly become one of the most transformative technologies of the 21st century, reshaping industries from healthcare to entertainment. But behind the excitement lies a growing controversy over how AI companies are acquiring and using content to train their models. Specifically, many are using copyrighted material without permission, raising legal and ethical questions about whether this practice can be considered “fair use.” As AI-generated content floods the market, stakeholders—from artists to tech companies—are debating the implications of this practice and what it means for creators, companies, and the future of intellectual property.

The Unfolding Crisis: AI Training on Copyrighted Content

AI’s dependence on vast datasets to learn how to perform tasks like generating text, images, and videos has sparked concerns over how companies are acquiring that data. For instance, the viral AI video startup Viggle recently admitted to training its models on YouTube videos without explicit permission. Viggle is not alone. Major players such as NVIDIA and Anthropic are facing similar accusations.

https://twitter.com/ViggleAI/status/1832114003562394013

“YouTube’s CEO has called it a ‘clear violation’ of their terms,” explains Mike Kaput, Chief Content Officer at the Marketing AI Institute. “Yet most AI companies are doing it, betting on a simple strategy: Take copyrighted content, hope nobody notices, and if you succeed, hire lawyers.” This has become a common approach in the rapidly developing AI sector, as companies rush to build more powerful models, often without securing proper licenses.

The underlying issue is the use of copyrighted material—often created by individual content creators or large media companies—without any compensation or acknowledgment. In Kaput’s view, this strategy banks on the public’s indifference: “Most people see cool AI videos and think: ‘Wow, that’s amazing!’ They don’t ask: ‘Wait, how was this trained?’”

Is This Fair Use or a Copyright Violation?

The heart of the debate lies in how copyright law defines “fair use,” a legal doctrine that allows limited use of copyrighted material without permission, usually for purposes such as criticism, comment, news reporting, teaching, or research. But does AI training fall under this category?


“It hinges on a key distinction in copyright law: whether a work is transformative or derivative,” says Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai. He explains that if AI-generated content is seen as transformative—meaning it adds new expression or meaning to the original work—it may be protected under fair use. However, if it is deemed derivative, merely replicating the original content, it could violate copyright laws.

“In the EU, regulators have said using copyrighted data for training without permission infringes on the copyright owner’s rights,” Penn continues. “In Japan and China, regulators have taken the opposite stance, saying the model is in no way the original work, and thus does not infringe.”

This leads to a critical question: Is the legal responsibility on the tool (the AI itself) or the user who generates content with it? “Only resolved court cases will tell,” Penn concludes.

The Public’s Indifference: Do People Care?

While the legal community is wrestling with these issues, the broader public seems largely disengaged from the debate. Justin C., co-founder of Neesh.AI, suggests that the average person is indifferent to AI’s data practices. “Most people feel like it’s out of their control,” he says. “They aren’t paying attention because it doesn’t directly affect them.” This lack of awareness means that AI companies have little fear of public backlash, as long as they continue delivering impressive products.

Similarly, Paul Guds, an AI management consultant, believes that the momentum behind AI development is too strong to stop. “The gains for the public outweigh the potential costs,” he argues. “Regulation on this matter will take years, and litigation will be costly and lengthy. In the end, this train cannot be stopped, worst case, it will be slowed down slightly.”

However, some believe this complacency could come with significant costs. “It feels a lot like Uber when it started,” says Melissa Kolbe, an AI and marketing strategist. “Just worry about the consequences later. The public doesn’t really care—unless it’s their own video.”

The Artistic Backlash: Protecting Creativity

While many in the tech community view AI as a tool for innovation, artists and creators feel differently. For them, the unchecked use of their work for AI training represents a threat to their livelihoods and the integrity of creative expression.

“The only people that really care about this are genuine artists,” says Jim Woolfe, an electronic musician. “The problem is that it’s become harder to tell the difference between real and generated content, and true creativity is in danger of being drowned out by bland, AI-generated art.” Woolfe predicts a backlash as more artists realize the scope of what’s at stake.

Others agree that AI could erode the value of original content. “It’s already harder to make a living as a creator,” says Reggie Johnson, a communication strategist. “Now, Big Tech companies are using copyrighted content to train AI without permission, and the government seems to be letting them get away with it.” Johnson points to the recent rejection of the Internet Archive’s appeal, a case that has sparked debate about whether AI companies are playing by a different set of rules than other industries.

Legal Implications: Can Copyright Law Keep Up?

The rapid pace of AI innovation is exposing gaps in current copyright laws. “Laws around copyright are already out of date,” says Doug V., a digital strategist. “With AI using content without permission or attribution, it’s a very complicated knot to unravel.” He anticipates that companies will begin inserting clauses into their terms and conditions, effectively requiring users to waive rights to their content for AI training purposes. “What artist will willingly upload their creations to social media if they’re effectively giving it all away for others to make derivatives of their work?” Doug asks.

This concern is echoed by Elizabeth Shaw, an AI strategy director, who suggests that the issue may soon become a hot topic in AI policy discussions. “Are we teasing an upcoming panel on this at MAICON?” she asks, referencing the Marketing Artificial Intelligence Conference.

The Future of AI and Copyright: What Comes Next?

As AI continues to evolve, the questions surrounding the use of copyrighted material will become more pressing. Some predict that regulation is inevitable, but it will take years to catch up. “I don’t think there’s a way to stop it,” Kaput admits. “Pandora’s box is already open.”

However, others believe the issue will come to a head sooner rather than later. “I predict a movement will rise that values real art over AI-generated content,” says Woolfe. “Once people realize what’s at stake, there will be a backlash.”

For now, the debate over whether AI companies can freely use copyrighted content for training remains unresolved. As courts begin to take on these cases, the line between fair use and infringement will continue to blur, leaving creators, companies, and lawmakers to grapple with the implications of AI’s rapid advancement.

In the meantime, it’s clear that AI is not just a technological innovation—it’s a legal and ethical minefield. As Guds puts it, “We’re trending toward falling off the slippery slope. The question is: how do we stop it?”

Prepare for $2,000 ChatGPT Subscriptions
https://www.webpronews.com/2000-chatgpt-subscriptions/ (Fri, 06 Sep 2024)

ChatGPT fans may be in for a rude awakening, with OpenAI reportedly investigating subscription options that could be as high as $2,000.

According to the Financial Times, OpenAI executives are trying to find the subscription sweet spot, one where the company can make money off its AI models, yet still drive subscriber growth with a price point customers will accept. Unfortunately, FT reports that $2,000 subscription fees are being discussed, although nothing has been decided.

OpenAI’s subscription dilemma is indicative of the challenges the AI industry is facing in general. Financial firms and investors have increasingly been sounding the alarm over the high price tag that comes with generative AI development.

In fact, the high cost has been cited as one of the reasons AI could be the tech industry’s latest bubble, rather than a transformative tech that’s here to stay. Jim Covello, Goldman Sachs Head of Global Equity Research, compared AI to earlier tech revolutions, saying its high cost limits its ability to have the same impact.

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

While it’s hard to imagine that OpenAI will go with a $2,000 subscription, the fact that it is even discussing such a high price underscores the growing pressure OpenAI—and the AI industry in general—is under to start recouping the massive investments that have been made.

Apple Reportedly Considering An Investment In OpenAI
https://www.webpronews.com/apple-reportedly-considering-an-investment-in-openai/ (Fri, 30 Aug 2024)

Apple is reportedly making a rare move, investigating a potential investment in OpenAI after announcing a deal to include ChatGPT in its products.

Apple took the opportunity to announce its Apple Intelligence features at WWDC 2024. Apple Intelligence is based on Apple’s own intelligence models, but the company also tapped OpenAI’s ChatGPT for more advanced functions. The decision to partner with OpenAI was seen as a major win for the AI firm, especially since Apple turned its back on discussions with Meta and only promised possible integration with Google’s Gemini or Anthropic’s Claude at some future date.

According to The Wall Street Journal, Apple is in talks to join a round of investment in OpenAI that is being led by Thrive Capital and also includes Microsoft. The round is reportedly worth several billion dollars and would leave OpenAI with a valuation of at least $100 billion.

As WSJ points out, Apple rarely invests in startups, making the reports all the more interesting. Like many companies in the tech industry, Apple clearly sees generative AI as a critical feature, one worth breaking with its historical pattern.

Grok-2: Revolutionizing AI or Just More of the Same? A Deep Dive into the Latest Large Language Model
https://www.webpronews.com/grok-2-revolutionizing-ai-or-just-more-of-the-same-a-deep-dive-into-the-latest-large-language-modelthe-rapid-evolution-of-artificial-intelligence-ai-has-brought-about-a-constant-influx-of-new-mode/ (Fri, 23 Aug 2024)

The rapid evolution of artificial intelligence (AI) has brought about a constant influx of new models and technologies, each promising to push the boundaries of what machines can achieve. Among these developments, Grok-2, the latest large language model (LLM) from xAI, stands out as both a potential game-changer and a source of controversy. Unlike its predecessors, Grok-2 arrived with little fanfare—no accompanying research paper, no detailed model card, and no formal academic endorsement. This mysterious launch has fueled a mixture of excitement and skepticism within the AI community, raising important questions about the future direction of AI development.

The Silent Debut of Grok-2

In the world of AI, new models are typically introduced with extensive documentation, including research papers that detail the architecture, training methods, and benchmarks of the model. Grok-2, however, broke from this tradition. It was released quietly, with only a basic Twitter chatbot available for public interaction. This lack of transparency has left many AI researchers puzzled and concerned. As one AI researcher put it, “It’s unusual, almost unheard of, to release a model of this scale without any academic backing or explanation. It raises questions about the model’s capabilities and the motivations behind its release.”

Despite the unconventional launch, Grok-2 has already demonstrated impressive capabilities. Early tests have shown that it performs well on several key benchmarks, including the Google-Proof Science Q&A benchmark (GPQA) and MMLU-Pro, where it ranks second only to Claude 3.5 Sonnet. These results suggest that Grok-2 has the potential to compete with the best LLMs currently available. However, the absence of detailed performance metrics and the opaque nature of its release have led to a mix of curiosity and skepticism.

One commenter on the ‘AI Explained’ YouTube channel encapsulated the general sentiment: “No paper? Just a table with benchmarks. What are the performance claims for Grok-2 really based on? Benchmarks have been repeatedly proven meaningless by this point.”

The Scaling Debate: Is Bigger Always Better?

A central topic in the ongoing AI discourse is the concept of scaling—essentially, the idea that increasing the size of a model, in terms of parameters and training data, will lead to better performance. This debate has been reignited by Grok-2 and a recent paper from Epoch AI, which suggests that by 2030, AI models could be scaled up by a factor of 10,000. Such a leap could potentially revolutionize the field, but it also raises significant questions about the path forward.

The Epoch AI paper posits that scaling to such an extent could fundamentally change how models interact with data, allowing them to develop more sophisticated internal models of the world. This idea, known as the development of “world models,” suggests that as LLMs grow, they might begin to understand the world in ways that are more akin to human cognition. This could enable breakthroughs in AI’s ability to reason, plan, and interact with humans on a deeper level.

However, not everyone in the AI community is convinced that scaling alone is the answer. “We’ve seen time and time again that more data and more parameters don’t automatically lead to more intelligent or useful models,” argues one AI critic. “What we need is better data, better training techniques, and more transparency in how these models are built and evaluated.”

This skepticism is echoed by many within the AI community. A user on the ‘AI Explained’ channel commented, “Does anybody really believe that scaling alone will push transformer-based ML up and over the final ridge before the arrival at the mythical summit that is AGI?” This sentiment reflects a broader concern that scaling might not address the fundamental limitations of current AI models.

Testing the Limits: Grok-2’s Early Performance

Given the lack of official documentation, independent AI enthusiasts and researchers have taken it upon themselves to test Grok-2’s capabilities. One such effort is the Simple Bench project, an independent benchmark designed to test the reasoning and problem-solving abilities of LLMs. The creator of Simple Bench, who runs the popular ‘AI Explained’ YouTube channel, has shared preliminary results from testing Grok-2. “Grok-2’s performance was pretty good, mostly in line with the other top models on traditional benchmarks. But it’s not just about scores—it’s about how these models handle more complex, real-world tasks,” he explained.

Simple Bench focuses on tasks that require a model to understand and navigate cause-and-effect relationships, which are often overlooked by traditional benchmarks. While Grok-2 performed well on many tasks, it still fell short in areas where Claude 3.5 Sonnet excelled. This discrepancy highlights a key issue in AI development: the challenge of creating models that not only excel in controlled environments but also perform reliably in the unpredictable real world.

One commenter, reflecting on the importance of benchmarks like Simple Bench, stated, “What I like about Simple Bench is that it’s ball-busting. Too many of the recent benchmarks start off at 75-80% on the current models. A bench that last year got 80% and now gets 90% is not as interesting anymore for these kind of bleeding edge discussions on progress.” This comment underscores the need for benchmarks that challenge models to perform beyond what is easily achievable, pushing the boundaries of AI capabilities.

The Future of AI: More Than Just Bigger Models?

As the AI community grapples with the implications of Grok-2 and the broader trend of scaling models, some researchers are exploring alternative paths to advancement. One promising area is the development of models that can create and utilize internal world models. These models would go beyond surface-level pattern recognition, instead developing a deeper understanding of the world’s underlying rules and structures.

Recent experiments have shown that LLMs are beginning to develop these kinds of models, albeit in rudimentary forms. A study referenced in the Simple Bench project found that after training on large datasets, a language model was able to infer hidden relationships and predict outcomes based on incomplete information. “It’s a small step, but it’s a sign that these models are starting to move beyond simple data processing and into something more complex,” said a researcher involved in the study.

However, the path to truly intelligent AI—often referred to as Artificial General Intelligence (AGI)—is still fraught with challenges. Some experts believe that current architectures, like those used in Grok-2, may not be enough to achieve AGI, no matter how much they are scaled. Instead, they argue that a new approach, possibly involving more sophisticated data labeling techniques or even a fundamental shift in how AI models are trained, may be necessary.

One viewer of the ‘AI Explained’ channel argued that the companies building these models are neglecting even basic safeguards. “We need deepfake regulation asap. We can’t count on the startup to do basic, literally basic safeguards, especially with voice cloning. Pretty straightforward to do live voice comparisons via embeddings to validate if it’s your voice. Inexpensive. Without being told too. These companies don’t care about the damage,” they noted, highlighting the ethical challenges that accompany the current trajectory of AI development.
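The embedding comparison the commenter describes is a real speaker-verification technique: map a voice sample to a fixed-length vector and accept it only if it sits close to an enrolled reference. Below is a minimal sketch of the comparison logic in Python; the embeddings are random placeholders (a real pipeline would produce them with a trained speaker encoder, such as an x-vector model), and the 0.75 threshold is an assumed value that would need tuning on labeled data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(enrolled: np.ndarray, live: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept live audio only if its embedding is close to the speaker's
    enrolled reference. The threshold here is illustrative, not tuned."""
    return cosine_similarity(enrolled, live) >= threshold

# Placeholder embeddings standing in for the output of a speaker encoder.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                    # enrolled reference voice
live = enrolled + rng.normal(scale=0.1, size=256)  # same voice, mild noise
other = rng.normal(size=256)                       # unrelated (or cloned) voice

print(is_same_speaker(enrolled, live))   # True: passes verification
print(is_same_speaker(enrolled, other))  # False: flagged as not the speaker
```

Whether such a check would defeat a high-quality clone is an open question; dedicated anti-spoofing models exist precisely because embeddings alone can be fooled.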

The Ethical Implications: Real-Time Deepfakes and Beyond

As AI models like Grok-2 become more advanced, they also pose new ethical challenges. One of the most pressing concerns is the potential for these models to generate highly convincing deepfakes in real time. Already, tools like Grok-2’s image-generating sibling, Flux, and other AI platforms like Ideogram 2 are capable of creating realistic images and videos. As one AI enthusiast noted, “We’re not far from a world where you won’t be able to trust anything you see online. The line between reality and fabrication is blurring at an alarming rate.”

The potential for misuse is enormous, from spreading misinformation to manipulating public opinion. The possibility of real-time deepfakes could lead to a world where visual and auditory evidence becomes entirely unreliable. As one commenter on the ‘AI Explained’ channel observed, “We are mindlessly hurtling towards a world of noise where nothing can be trusted or makes any sense.” This dystopian vision highlights the urgent need for regulatory frameworks and technological solutions to address the risks posed by AI-generated content.

Some experts are calling for stricter regulations and the development of new technologies to help detect and counteract deepfakes. Demis Hassabis, CEO of Google DeepMind, recently pointed out, “We need to be proactive in addressing these issues. The technology is advancing quickly, and if we’re not careful, it could outpace our ability to control it.”

In response to these concerns, researchers are exploring new methods to verify the authenticity of digital content. One promising approach is the use of zero-knowledge proofs, a cryptographic technique that allows for the verification of information without revealing the information itself. This could potentially be used to create “personhood credentials” that verify the identity of individuals in digital spaces. As one viewer commented, “I have been yelling about zero knowledge proofs for years. They are absolutely required for the next phase of humanity, without exception.”
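To make the idea concrete, here is a toy Schnorr-style identification protocol, one of the simplest zero-knowledge proofs: the prover convinces a verifier that it knows a secret key x without ever transmitting x. The group parameters below are deliberately tiny so the arithmetic is visible; they are wildly insecure and purely illustrative.

```python
import secrets

# Toy group parameters (NOT secure; real deployments use ~256-bit groups
# or elliptic curves): p is prime, q divides p - 1, g has order q mod p.
p, q, g = 23, 11, 2

def keygen():
    x = secrets.randbelow(q - 1) + 1   # prover's secret key
    y = pow(g, x, p)                   # public key y = g^x mod p
    return x, y

def prove_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)             # commitment t = g^r mod p

def prove_respond(r, x, c):
    return (r + c * x) % q             # response s = r + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p); the check reveals nothing about x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()                        # prover holds x, publishes y
r, t = prove_commit()                  # 1. prover sends commitment t
c = secrets.randbelow(q)               # 2. verifier sends random challenge c
s = prove_respond(r, x, c)             # 3. prover sends response s
print(verify(y, t, c, s))              # True: identity proven, x never revealed
```

Personhood-credential proposals build on the same principle at cryptographic scale: you prove “I hold a valid credential” without revealing which one.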

A Turning Point or Just Another Model?

The debate over Grok-2’s significance is far from settled. Some see it as a step toward a new era of AI-driven innovation, while others view it as just another model in an increasingly crowded field, marked by incremental improvements rather than groundbreaking advancements. As one skeptic on the ‘AI Explained’ channel remarked, “How can we really judge the importance of Grok-2 when there’s no transparency about how it works or what it’s truly capable of? Without that, it’s just another black box.”

Despite these reservations, the release of Grok-2 is undeniably a moment of interest, if not a turning point, in the AI landscape. The model’s capabilities—demonstrated through early benchmark performance—suggest it could play a significant role in shaping future applications of AI. However, this potential is tempered by the ongoing challenges in AI development, particularly around issues of ethics, transparency, and the limits of scaling.

Moreover, the ethical implications of models like Grok-2 cannot be overstated. As AI continues to advance, the line between reality and digital fabrication is becoming increasingly blurred, raising concerns about trust and authenticity in the digital age. The potential for real-time deepfakes, coupled with the model’s capabilities, presents both opportunities and risks that society must grapple with sooner rather than later.

Ultimately, Grok-2’s legacy will depend on how these challenges are addressed. Will the AI community find ways to harness the power of large language models while ensuring they are used responsibly? Or will Grok-2 and its successors become symbols of an era where technological advancement outpaced our ability to manage its consequences?

As we stand at this crossroads, the future of AI remains uncertain. What is clear, however, is that the development of models like Grok-2 is only the beginning. Whether it will lead us into a new era of AI-driven innovation or become just another step in the long journey toward truly intelligent machines is a question that only time—and continued research—will answer.

In the words of one AI enthusiast, “We are at the brink of something monumental, but whether it’s a breakthrough or just another iteration depends on how we proceed from here.” The journey of AI, it seems, is far from over, and Grok-2 might just be one of the many signposts along the way.

Google Assistant is Old News; Move Over to Gemini Live: The New Face of Conversational AI
https://www.webpronews.com/google-assistant-is-old-news-move-over-to-gemini-live-the-new-face-of-conversational-ai/ (Fri, 23 Aug 2024)

In a world where digital assistants have become as ubiquitous as smartphones, Google has once again upped the ante with its latest innovation, Gemini Live. Launched with much fanfare at the recent “Made by Google” event, Gemini Live promises to revolutionize the way we interact with our devices, offering a conversational experience that feels almost human. But does it live up to the hype, or is it just another tech gimmick? Let’s dive deep into what Gemini Live brings to the table and explore its potential impact on the future of AI-powered personal assistants.

The Rise of Gemini Live

When Google Assistant was first introduced, it was hailed as a groundbreaking innovation. It could set timers, control smart home devices, and provide weather updates with just a simple voice command. However, as technology advanced, the expectations for digital assistants grew, and the limitations of Google Assistant became increasingly apparent. Enter Gemini Live, Google’s latest attempt to stay ahead of the curve in the rapidly evolving world of artificial intelligence.

Sissie Hsiao, Vice President and General Manager of Gemini Experiences and Google Assistant, highlighted the need for a more natural and intuitive AI interaction during the launch event. “With Gemini, we’re reimagining what it means for a personal assistant to be truly helpful,” Hsiao said. “Gemini is evolving to provide AI-powered mobile assistance that offers a new level of help, all while being more natural, conversational, and intuitive.”

What is Gemini Live?

At its core, Gemini Live is a mobile conversational experience that allows users to have free-flowing, natural conversations with their AI assistant. Unlike traditional digital assistants that require specific voice commands, Gemini Live can engage in continuous dialogue without needing to be reactivated after every question. This feature alone sets it apart from its predecessors and competitors, offering a more seamless and human-like interaction.

One of the most striking aspects of Gemini Live is its ability to understand context and provide detailed, thoughtful responses. For instance, when asked about a recent Liverpool football match, Gemini Live not only provided the score but also gave an analysis of the game’s performance. This level of depth and understanding is something that previous digital assistants have struggled to achieve.

A Crash Course on Using Gemini Live

For those new to Gemini Live, getting started is surprisingly simple. The feature comes pre-installed on the Google Pixel 9 and is also available on other devices like the Samsung Galaxy S24 Ultra and Pixel 8 Pro. To activate Gemini Live, users simply need to engage Google Assistant and select the Gemini Live option from the bottom right of the screen. From there, users are prompted to choose from 10 different voices, each with its unique tone and style.

What sets Gemini Live apart is its ability to continue conversations even when users navigate away from the Gemini Live interface. This means you can carry on a conversation with your AI assistant while using other apps on your phone, making it a truly integrated part of your mobile experience.

The Good, the Bad, and the Limitations

While Gemini Live offers a host of impressive features, it’s not without its limitations. One of the most significant drawbacks is that it requires an internet connection to function. Unlike some AI models that can perform tasks offline, Gemini Live operates entirely in the cloud. This reliance on the cloud means that if you’re without an internet connection, Gemini Live won’t be able to assist you.

Additionally, Gemini Live currently lacks access to some of the more personal features that users have come to expect from digital assistants. For example, it cannot access your calendar, emails, or messages, and it doesn’t have the ability to send text messages or make calls. As one early user pointed out, “Gemini Live is great for general information and casual conversation, but when it comes to personal tasks, it’s still playing catch-up.”

Despite these limitations, Gemini Live excels in areas where traditional digital assistants have often fallen short. Its ability to translate languages instantly and provide responses in different languages makes it a valuable tool for global users. Moreover, Gemini Live remembers past conversations, allowing users to pick up where they left off days or even weeks later. This continuity of conversation is a significant leap forward in making AI interactions feel more natural and less transactional.

Gemini Live vs. Competitors: A New Standard?

The launch of Gemini Live comes at a time when AI-powered voice assistants are becoming increasingly sophisticated. OpenAI’s ChatGPT, Apple’s Siri, and Amazon’s Alexa have all made strides in improving their conversational capabilities, but Gemini Live is setting a new standard with its real-time interaction and low latency. According to users, the seamlessness of Gemini Live’s responses makes it feel more like a conversation with a friend rather than a machine.

One user described their experience with Gemini Live as “wild,” noting how impressively the AI handled complex queries and adjusted on the fly. “It’s not just about giving you generic answers; Gemini Live seems to understand the nuance of what you’re asking and responds accordingly,” they said.

However, not everyone is convinced that Gemini Live is ready to dethrone its competitors just yet. In a recent review, tech analyst Richard Priday highlighted some of the challenges he faced during his 24-hour trial with the AI. “While Gemini Live’s conversational abilities are impressive, there were moments when it struggled with accuracy, particularly when providing directions or current news updates,” Priday noted. “It feels like a step in the right direction, but there’s still work to be done.”

The Future of Gemini Live and AI Assistants

As AI continues to evolve, the role of digital assistants like Gemini Live is likely to expand. Google is already working on deeper integrations with its suite of apps, including Gmail, YouTube, and Google Maps, which could make Gemini Live an even more indispensable tool for everyday tasks. The potential for Gemini Live to be integrated into Google’s smart home devices, such as Nest speakers, also opens up new possibilities for voice-activated control of home environments.

But perhaps the most exciting aspect of Gemini Live is its potential to redefine what we expect from AI assistants. By focusing on creating a more conversational, context-aware experience, Google is pushing the boundaries of how we interact with technology. As Hsiao aptly put it, “We’re in the early days of discovering all the ways an AI-powered assistant can be helpful, and Gemini Live is just the beginning.”

Final Thoughts

In its current form, Gemini Live is an impressive step forward in the world of AI-powered digital assistants. Its natural conversational abilities and seamless integration into the mobile experience make it a compelling option for users looking for more than just a basic voice command assistant. However, there are still areas where Gemini Live needs to improve, particularly in its ability to handle personal tasks and provide accurate real-time information.

As AI technology continues to advance, it’s likely that we’ll see even more sophisticated iterations of Gemini Live in the future. For now, though, it’s clear that Google is on the right track in its quest to create a truly helpful personal assistant. Whether you’re a tech enthusiast or just someone looking for a more natural way to interact with your phone, Gemini Live is worth exploring.

And as for the competition? They’d better watch out—because Gemini Live is here, and it’s not just an assistant; it’s a game-changer.

Gartner The Latest To Warn of Generative AI’s Limitations
https://www.webpronews.com/gartner-the-latest-to-warn-of-generative-ais-limitations/ (Fri, 23 Aug 2024)

Gartner has joined the growing chorus of companies, institutions, and investors warning that artificial intelligence may never realize the promise its proponents hold out.

AI has taken the tech world by storm, with companies large and small investing billions to advance Generative AI (GenAI) models and try to be the first to crack Artificial General Intelligence (AGI), AI that can truly rival human intelligence. Unfortunately for companies, AI development has been plagued with runaway costs; seemingly unfixable hallucinations; legal and ethical issues surrounding copyright and content ownership; and the digital equivalent of mad cow disease.

Financial institutions have increasingly been warning that AI may never live up to the hype, and could represent the latest tech bubble on the verge of bursting. Gartner has now weighed in, joining the chorus of organizations voicing concern over the future of the tech.

In an interview with The Register, Gartner analyst Arun Chandrasekaran said clients were focused on GenAI, investing billions in pursuit of it. Nonetheless, such organizations were about to experience the “trough of disillusionment.”

“The expectations and hype around GenAI are enormously high,” Chandrasekaran added. “So it’s not that the technology, per se, is bad, but it’s unable to keep up with the high expectations that I think enterprises have because of the enormous hype that’s been created in the market in the last 12 to 18 months.”

Chandrasekaran says there is a place for GenAI, but that the hype surrounding it has oversold what it can do, especially in the short term.

“I truly still believe that the long-term impact of GenAI is going to be quite significant, but we may have overestimated, in some sense, what it can do in the near term.”

Because of the issues with GenAI, including “no robust solution to hallucinations,” “modest lasting corporate adoption,” and “modest profits,” Chandrasekaran believes the seemingly unlimited hose of cash being directed at GenAI will soon dry up.

“To be sure, Generative AI itself won’t disappear,” Chandrasekaran explained. “But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may be sold, or stripped for parts.”

Others Have Voiced Similar Concerns

Chandrasekaran’s observations echo those of Goldman Sachs Head of Global Equity Research Jim Covello. In a report in July, Covello pointed out the cost difference between AI and previous disruptive technologies, like the Internet. In past cases, low-cost tech was disrupting far more costly existing solutions. In the case of AI, it’s the exact opposite, with one of the most expensive technologies in history trying to disrupt much cheaper options.

Covello makes the case that AI’s ability is nowhere near where it needs to be to justify that cost:

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

AI’s day of reckoning may be fast approaching, with the companies pushing to develop it forced to justify its cost or come up with some other way to continue funding its development.

Anthropic The Latest AI Firm Sued For Copyright Infringement
https://www.webpronews.com/anthropic-the-latest-ai-firm-sued-for-copyright-infringement/ (Wed, 21 Aug 2024)

Anthropic is the latest AI firm to be sued for copyright infringement, with three authors filing a class-action lawsuit in California.

AI models are notorious for scraping the web, devouring content in an effort to learn and improve. Unfortunately, the practice has raised a myriad of legal and ethical questions regarding copyright and ownership of content. OpenAI and Microsoft have already been sued, and Perplexity AI has faced allegations of ignoring the Robots Exclusion Protocol and scraping websites without permission.

Anthropic now joins the list of AI companies facing legal consequences related to allegations of unauthorized use of copyrighted material. Writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson have filed a class-action lawsuit, saying Anthropic trained its AI models on their books, as well as those of other writers.

The complaint accuses Anthropic of blatant theft.

Anthropic has built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books. Rather than obtaining permission and paying a fair price for the creations it exploits, Anthropic pirated them. Authors spend years conceiving, writing, and pursuing publication of their copyrighted material. The United States Constitution recognizes the fundamental principle that creators deserve compensation for their work. Yet Anthropic ignored copyright protections. An essential component of Anthropic’s business model—and its flagship “Claude” family of large language models (or “LLMs”)—is the large-scale theft of copyrighted works.

The plaintiffs go on to accuse Anthropic of knowingly using pirated copies of the writers’ works, and then going to great lengths to hide the extent of its actions.

Plaintiffs are authors of an array of works of fiction and nonfiction. They bring this action under the Copyright Act to redress the harm caused by Anthropic’s brazen infringement. Anthropic downloaded known pirated versions of Plaintiffs’ works, made copies of them, and fed these pirated copies into its models. Anthropic took these drastic steps to help computer algorithms generate human-like text responses.

Anthropic has not even attempted to compensate Plaintiffs for the use of their material. In fact, Anthropic has taken multiple steps to hide the full extent of its copyright theft. Copyright law prohibits what Anthropic has done here: downloading and copying hundreds of thousands of copyrighted books taken from pirated and illegal websites.

The plaintiffs then make the case that Anthropic would not have a commercially successful product if it hadn’t stolen their work, and the work of countless others, in its efforts to improve its AI models.

Anthropic’s immense success is a direct result of its copyright infringement. The quality of Claude, or any LLM, is a consequence of the quality of the data used to train it. The more high-quality, longform text on which an LLM is trained, the more adept an LLM will be in generating lifelike, complex, and useful text responses to prompts. Without usurping the works of Plaintiffs and the members of the Class to train its LLMs to begin with, Anthropic would not have a commercial product with which to damage the market for authors’ works. Anthropic has enjoyed enormous financial gain from its exploitation of copyrighted material. Anthropic projects it will generate more than $850 million of revenue in 2024. After ten rounds of funding, Anthropic has raised $7.6 billion from tech giants like Amazon and Google. As of December 2023, these investments valued the company in excess of $18 billion, a figure that is likely even higher today.

Anthropic has gone to great lengths to position itself as the responsible AI firm, one focused on using AI for the betterment of humanity, while developing it in a responsible manner that directly addresses concerns about the damage AI may do. The plaintiffs take issue with that stance, pointing out that Anthropic’s alleged behavior would be in direct conflict with those goals.

Anthropic’s commercial gain has come at the expense of creators and rightsholders, including Plaintiffs and members of the Class. Book readers typically purchase books. Anthropic did not even take that basic and insufficient step. Anthropic never sought—let alone paid for—a license to copy and exploit the protected expression contained in the copyrighted works fed into its models. Instead, Anthropic did what any teenager could tell you is illegal. It intentionally downloaded known pirated copies of books from the internet, made unlicensed copies of them, and then used those unlicensed copies to digest and analyze the copyrighted expression—all for its own commercial gain. The end result is a model built on the work of thousands of authors, meant to mimic the syntax, style, and themes of the copyrighted works on which it was trained.

Anthropic styles itself as a public benefit company, designed to improve humanity. In the words of its co-founder Dario Amodei, Anthropic is “a company that’s focused on public benefit.” For holders of copyrighted works, however, Anthropic already has wrought mass destruction. It is not consistent with core human values or the public benefit to download hundreds of thousands of books from a known illegal source. Anthropic has attempted to steal the fire of Prometheus. It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.

This is not the first time Anthropic has been criticized for its content scraping activities. Electronics repair site iFixit recently called the company out for hitting its “servers a million times in 24 hours.” In response, other companies and organizations described similar behavior.

If Anthropic truly wants to set itself apart as a responsible AI firm, it needs to do a better job in how it approaches issues of copyright, web scraping, and AI model training.

4 GenAI Risks for Businesses
https://www.webpronews.com/ai-security-posture-management/ (Tue, 20 Aug 2024)

GenAI (generative artificial intelligence) can be a powerful business tool. Adopting this technology enables you to automate repetitive tasks, letting your staff focus on more revenue-driven activities. GenAI facilitates your company’s fast and personalized responses to customer inquiries and ensures data-driven business decisions.

While GenAI has become increasingly popular in business, its adoption comes with a certain level of risk. Understanding the threats of adopting GenAI in your company and the necessary preventive measures can help safeguard your business. Discussed below are four GenAI risks for businesses.

  1. Privacy and data security

GenAI solutions usually need access to significant amounts of data to train on and generate diverse, top-quality output. The data might contain confidential or sensitive details that may be leaked, exposed, stolen, or misused by malicious actors. While ensuring the security and privacy of the data used to train and engage with these GenAI tools isn’t easy, implementing the right data protection tactics can be helpful. With AI security posture management solutions, you can boost data security and privacy by:

  • Swiftly and correctly identifying privacy and data security risks
  • Analyzing data context and content to facilitate decision-making
  • Remediating risks before malicious actors can exploit them

This prioritizes transparency regarding data access, usage, and storage so businesses can evaluate their data security and privacy to spot vulnerabilities and institute measures to reduce risk efficiently.
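As one concrete (and deliberately simplified) example of such a tactic, the sketch below strips obvious identifiers from text before it is sent to an external GenAI service. The regex patterns are illustrative assumptions only; real deployments layer proper data-loss-prevention and classification tooling on top of rules like these.

```python
import re

# Minimal redaction pass before a prompt leaves the company. The patterns
# below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Refund jane.doe@example.com, card 4111 1111 1111 1111, SSN 123-45-6789."
print(redact(prompt))
# Refund [EMAIL REDACTED], card [CARD REDACTED], SSN [SSN REDACTED].
```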

  2. Fraud and identity risks

GenAI presents fraud risks via:

  • Synthetic identity fraud: It involves using AI to generate fake biometric data and fabricate identification documents, allowing fraudsters to establish counterfeit identities for financial crimes   
  • Sophisticated AI-generated content: Fraudsters use AI tools to generate social engineering attacks and phishing emails to launch sophisticated attacks, such as impersonating customer care representatives to acquire sensitive data from victims
  • Deepfakes: They’re a powerful identity theft tool whose risks include manipulation of facial recognition systems and fraudsters making fake documents

Machine learning methods, including behavioral biometrics, risk scoring, and more, can be used for real-time transaction tracking and fraud prevention (a minimal sketch follows the list below). They can help:

  • Catch unusual patterns
  • Analyze the possibility of fraud
  • Unearth fraudulent actor networks
  • Analyze customer behaviors and transaction patterns
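As a rough illustration of the real-time scoring described above, the sketch below trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on ordinary transaction behavior and flags transactions that deviate from it. The features, distributions, and contamination rate are invented for the example; a production system would use far richer behavioral features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy feature matrix: [amount_usd, hour_of_day, minutes_since_last_txn].
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 1000),   # typical purchase amounts
    rng.normal(14, 4, 1000) % 24,    # daytime-centered activity
    rng.exponential(600, 1000),      # spacing between transactions
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming transactions in real time: -1 flags an anomaly.
suspicious = np.array([[9500.0, 3.0, 0.5]])  # huge amount, 3 a.m., rapid-fire
ordinary = np.array([[42.0, 15.0, 480.0]])
print(model.predict(suspicious))  # typically [-1] -> flag for review
print(model.predict(ordinary))    # typically [1]  -> pass
```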
  3. Misinformation

GenAI tools can create video, audio, or text content that seems credible, but it isn’t. Because these solutions are built on large language models (LLMs), they can generate convincing content based on predictions rather than facts. The use of GenAI in business increases the possibility of hallucinations, which arise when these tools misrepresent information due to inadequate training data. These hallucinations pose a major threat to businesses that depend on GenAI tools for content creation, decision-making, or research.

When misleading or false data is presented as facts and employees or clients act on it unknowingly, it can result in legal liabilities and reputational damage. Implementing guardrails can help ensure AI-generated content remains within acceptable limits. You can also have humans review the content for coherence and accuracy and make relevant corrections.
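A guardrail can be as simple as refusing to ship any answer that is not grounded in a trusted source. The sketch below uses naive word overlap as the grounding test, which is only a stand-in: production guardrail stacks rely on stronger checks such as entailment models or citation verification, and the 0.6 threshold here is an arbitrary assumption.

```python
def grounded_enough(answer: str, source: str, min_overlap: float = 0.6) -> bool:
    """Naive guardrail: pass an answer only if most of its words also
    appear in the trusted source text; otherwise route to human review."""
    answer_words = {w.lower().strip(".,") for w in answer.split()}
    source_words = {w.lower().strip(".,") for w in source.split()}
    if not answer_words:
        return False
    overlap = len(answer_words & source_words) / len(answer_words)
    return overlap >= min_overlap

source = "Apple Intelligence starts rolling out next month with iOS 18.1."
ok = "Apple Intelligence starts rolling out with iOS 18.1."
bad = "Apple Intelligence launched globally in 2022 on every iPhone."

for answer in (ok, bad):
    status = "pass" if grounded_enough(answer, source) else "needs human review"
    print(f"{status}: {answer}")
```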

  4. Enhanced attack efficiency

Cybercriminals are now leveraging GenAI technologies to automate and scale their attacks. AI-enabled cyberattacks utilize AI to automate and improve traditional cyberattack capabilities, making them more targeted, sophisticated, and hard to detect. Implementing posture management and AI-powered anomaly detection systems can help safeguard against AI-driven cyberattacks.

Endnote

While adopting GenAI in business can be rewarding, it comes with a certain level of risk. Familiarizing yourself with GenAI risks, including privacy and data security, misinformation, fraud and identity theft, and more, can help you take proactive measures to safeguard your business.

Google Expands AI Overviews To Six New Countries
https://www.webpronews.com/google-expands-ai-overviews-to-six-new-countries/ (Mon, 19 Aug 2024)

Google is expanding its AI Overviews to six additional countries, as well as adding new abilities to the feature.

Google introduced AI Overviews in mid-May, a new feature that uses generative AI to answer people’s questions. The company is building on that launch with new features, including easier access to relevant links.

Now, we’re introducing more ways to check out relevant websites while you search, with a new right-hand link display for AI Overviews on desktop – also accessible on mobile by tapping the site icons on the upper right. These updates are starting to roll out globally today for AI Overviews in all launched countries, as well as for Search Labs users in more than 120 countries and territories.

We’re also currently testing the addition of links to relevant web pages directly within the text of AI Overviews (in addition to the prominent links we already show), making it even easier for people to click out and visit sites that interest them.

The company is also adding the ability to save AI Overviews for future reference and to simplify the language of the response.

First is the ability to “save” a specific AI Overview for future reference, and get back to the helpful info and links you found on your initial search, like “what are some interesting science projects I can do with my 12 year old son.” Just tap the new “save” button underneath your AI Overview, and when you conduct the same search again, you’ll get the same AI Overview in your results. This makes it even easier to access, and click out to the content you were interested in. You can also see your saved AI Overviews by tapping on your profile icon from Search and navigating to your Interests page.

This capability is available for English queries in the U.S. – just make sure you’ve enrolled in the “AI Overviews and more” experiment in Search Labs.

Second, you’ll now see an option on some AI Overviews to simplify the language with a single tap, which we previewed earlier this year. This can be helpful if you’re new to a topic and want an easier way to digest the information. This is available if you’re enrolled in the “AI Overviews and more” experiment in Search Labs, for English queries in the U.S.

Finally, Google says it is expanding access to AI Overviews to six new countries, including “United Kingdom, India, Japan, Indonesia, Mexico and Brazil – along with local language support in each country.”

A Troubled Launch

Despite Google touting AI Overviews as the next evolution of search, the feature wasn’t without its issues. AI Overviews gained notoriety for some hilariously bad answers, including telling users to put glue on pizza to help hold the cheese on. The responses drew criticism and mockery from the company’s competitors, with Perplexity.AI poking fun at it on X.

The issues led Liz Reid, VP and Head of Google Search, to publicly acknowledge the problems and outline the company’s efforts to address them. The fact that Google is bringing AI Overviews to new markets would imply the company believes it is past the feature’s embarrassing faux pas.

Silent Upgrade: Rumors Swirl Around ChatGPT’s New Capabilities and OpenAI’s Secret Projects
https://www.webpronews.com/silent-upgrade-rumors-swirl-around-chatgpts-new-capabilities-and-openais-secret-projects/ (Mon, 19 Aug 2024)

In recent days, the AI community has been buzzing with rumors and speculation about a possible silent upgrade to OpenAI’s ChatGPT. Users noticed subtle yet significant improvements in the model’s performance, prompting discussions about whether a new version of the AI had been quietly rolled out. The speculation turned out to be true when OpenAI confirmed that a new model, dubbed GPT-4o, had indeed been running in ChatGPT since last week.

A Low-Key Rollout with Big Impacts

The confirmation came via a modest tweet from OpenAI, stating, “There’s a new GPT-4o model out in ChatGPT since last week. Hope you’re all enjoying it and check it out if you haven’t. We think you’ll like it.” This announcement followed days of anecdotal reports from users who observed a marked improvement in ChatGPT’s capabilities. Matt Shumer, CEO of HyperWrite, noted that the new version provided “better vibes on an output than 3.5,” signaling a noticeable enhancement in the AI’s response quality.

Despite the excitement among some users, the silent nature of the upgrade has sparked a broader debate about transparency in AI development. Critics, including Professor Ethan Mollick, have voiced concerns over the lack of release notes accompanying the update. Mollick pointed out that users who rely on ChatGPT for real tasks need clear guidance on what has changed. This sentiment was echoed by Gradio founder Abubakar Abid, who highlighted the challenges of reproducibility in academic research when the underlying model can change without notice.

Project Strawberry and the Next Frontier in AI

Adding to the intrigue, the upgrade appears to be tied to OpenAI’s secretive Project Strawberry, which has been a hot topic on social media. Project Strawberry is believed to be at the forefront of AI innovation, and recent reports suggest that it might be connected to the development of the SusColumnR model—a smaller yet highly advanced AI designed for superior reasoning capabilities. The connection between these models and the ongoing development of GPT-5 raises interesting possibilities about the future of AI.

AI insiders, like the well-known Jimmy Apples, have speculated that the GPT-4o update could be just a preview of more substantial advancements to come. There are rumors that OpenAI might be preparing to release a much larger model, potentially a 70-billion-parameter version, which could redefine the expectations for AI performance, particularly in comparison to competitors like Google’s upcoming models.

The Strategic Silence

The strategic silence surrounding these updates is not unprecedented. OpenAI has a history of rolling out significant changes quietly, perhaps to avoid the intense scrutiny that often follows major tech announcements. This approach has its critics, with some users expressing frustration over the lack of transparency. “Another release without release notes,” wrote Professor Mollick. “People actually use ChatGPT for real tasks and need guidance.”

However, this silence might also be part of a broader strategy to stay ahead in the competitive AI landscape. Historically, OpenAI has timed its releases to outshine Google’s advancements, often launching updates just days before Google’s own announcements. The timing of the GPT-4o release has led to speculation that OpenAI might be positioning itself for another major update, possibly linked to Project Strawberry, that could overshadow any new developments from its rivals.

The Bigger Picture: AGI and Beyond

Beyond the immediate improvements, these updates could signify a step closer to achieving Artificial General Intelligence (AGI). The SusColumnR model, rumored to be part of Project Strawberry, has demonstrated remarkable reasoning abilities, leading some to speculate that OpenAI is inching closer to AGI-level systems. If true, this would mark a significant milestone in AI development, with far-reaching implications for industries across the globe.

OpenAI’s commitment to safety and ethical considerations is likely a key factor in the careful rollout of these technologies. The company has hinted that only models with a post-mitigation score of medium or below will be deployed, ensuring that more powerful models are used responsibly. This cautious approach is crucial as AI systems become increasingly capable and potentially disruptive.

AI Arms Race Heats Up

As the AI arms race heats up, OpenAI’s strategic moves, including the quiet release of GPT-4o, suggest that the company is preparing for even bigger leaps in AI technology. With the potential release of a larger model and the ongoing developments under Project Strawberry, the future of AI looks poised for groundbreaking advancements. Whether these innovations will bring us closer to AGI or simply push the boundaries of what AI can do remains to be seen. But one thing is clear: the world of AI is evolving rapidly, and OpenAI is at the forefront of this exciting journey.

Microsoft Warns Copilot AI Should Not Be Used For Professional Advice https://www.webpronews.com/microsoft-warns-copilot-ai-should-not-be-used-for-professional-advice/ Thu, 15 Aug 2024 20:36:59 +0000 https://www.webpronews.com/?p=606510 Microsoft has updated its Microsoft Services Agreement, warning users they should not rely on Copilot AI for professional advice.

Artificial intelligence is the hottest trend in tech, with users across industries trying to push the boundaries of what it can do. In some cases, this has resulted in embarrassing issues when AI models have provided false—even illegal—advice.

Microsoft’s latest terms make clear that users should not put too much stock in results produced by Copilot AI:

Assistive AI. AI services are not designed, intended, or to be used as substitutes for professional advice.

Companies and organizations have continued to struggle with the advice that AI chatbots and models provide. For example, New York City’s MyCity AI chatbot drew criticism for giving bad advice, including suggestions that were illegal. Examples include telling business owners they could take workers’ tips, telling landlords they could engage in income-based discrimination, and saying stores are not required to accept cash.

Microsoft is clearly working to temper people’s expectations, at least from a legal perspective.

AIs At Risk Of ‘Model Autophagy Disorder,’ AI’s Mad Cow Disease https://www.webpronews.com/ais-at-risk-of-model-autophagy-disorder-ais-mad-cow-disease/ Mon, 12 Aug 2024 12:00:00 +0000 https://www.webpronews.com/?p=606383 A new study is painting a dire picture of the future of AI, saying AI models are at risk of ‘Model Autophagy Disorder’ (MAD), the digital equivalent to mad cow disease.

Companies are racing to train AIs on as much data as they can feed them, gobbling up text, images, videos, and more. The practice has drawn criticism, however, raising questions of ownership and copyright. The controversy has led some companies to evaluate synthetic data as an alternative to real data, but Rice University is warning of a major downside.

“The problems arise when this synthetic data training is, inevitably, repeated, forming a kind of a feedback loop ⎯ what we call an autophagous or ‘self-consuming’ loop,” said Richard Baraniuk, Rice’s C. Sidney Burrus Professor of Electrical and Computer Engineering. “Our group has worked extensively on such feedback loops, and the bad news is that even after a few generations of such training, the new models can become irreparably corrupted. This has been termed ‘model collapse’ by some ⎯ most recently by colleagues in the field in the context of large language models (LLMs). We, however, find the term ‘Model Autophagy Disorder’ (MAD) more apt, by analogy to mad cow disease.”

The report goes on to highlight the etymology of the MAD label.

Mad cow disease is a fatal neurodegenerative illness that affects cows and has a human equivalent caused by consuming infected meat. A major outbreak in the 1980s-90s brought attention to the fact that mad cow disease proliferated as a result of the practice of feeding cows the processed leftovers of their slaughtered peers ⎯ hence the term “autophagy,” from the Greek auto-, which means “self,” and phagy ⎯ “to eat.”

To test the phenomenon, the scientists used synthetic data in AI training loops to see what the result would be. The results were both revealing and disturbing.

Progressive iterations of the loops revealed that, over time and in the absence of sufficient fresh real data, the models would generate increasingly warped outputs lacking quality, diversity, or both. In other words, the more fresh data, the healthier the AI.
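
The dynamic is easy to reproduce in miniature. The toy simulation below is a sketch of the general idea, not the Rice team’s experimental setup: it fits a one-dimensional Gaussian to data, samples from the fit, then refits on those samples alone, generation after generation. With no fresh real data entering the loop, the estimated spread of the distribution drifts rather than holding steady, a scalar stand-in for the loss of diversity the study describes.

```python
# Toy sketch of an autophagous ("self-consuming") training loop. This is NOT
# the Rice group's experiment; it just refits a Gaussian to its own samples.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=500)  # the only "fresh" data

mu, sigma = real_data.mean(), real_data.std()
print(f"gen  0: mean={mu:+.3f}  std={sigma:.3f}")

for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=500)    # sample the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic only
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# Mixing fresh real data back in at each step (e.g., half real, half
# synthetic) stabilizes the estimates, mirroring the study's conclusion.
```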

Side-by-side comparisons of image datasets resulting from successive generations of a model paint an eerie picture of potential AI futures. Datasets consisting of human faces become increasingly streaked with gridlike scars ⎯ what the authors call “generative artifacts” ⎯ or look more and more like the same person. Datasets consisting of numbers morph into indecipherable scribbles.

Progressive Artifact Amplification – Credit: Digital Signal Processing Group/Rice University

The scientists ultimately concluded that AI models must consume a diet of real data to avoid MADness.

“Our theoretical and empirical analyses have enabled us to extrapolate what might happen as generative models become ubiquitous and train future models in self-consuming loops,” Baraniuk said. “Some ramifications are clear: without enough fresh real data, future generative models are doomed to MADness.”

Sampling With Generational Bias – Credit: Digital Signal Processing Group/Rice University

The study, in all its detail, is a revealing look at the future of AI models and could have profound implications for where AI will fit in. Proponents who claim AI will continue to revolutionize industries and become ever more capable, as well as critics who fear AI will wipe out entire industries and job descriptions, may both be in for a surprise.

Unless AI researchers can solve the MAD dilemma, the human element will never be fully replaced, since there will always be a need for fresh data on which AI can continue to be trained.

Google’s Gemini Pro Takes The Lead In Chatbot Arena https://www.webpronews.com/googles-gemini-pro-takes-the-lead-in-chatbot-arena/ Fri, 02 Aug 2024 13:41:03 +0000 https://www.webpronews.com/?p=606127 Google’s Gemini, despite a rocky start, appears to be gaining its footing, with the most recent model beating GPT-4o and Claude 3.5 in the LMSYS Chatbot Arena leaderboard.

The Chatbot Arena is an open platform designed to compare the top chatbots’ capabilities and show which is most advanced at any given point. OpenAI’s GPT models and Anthropic’s Claude have dominated the Chatbot Arena, at least until now.

According to an X post by LMSYS, Google’s Gemini 1.5 Pro has now surpassed both and holds the top spot on the leaderboard.
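
For readers unfamiliar with how the Arena turns votes into rankings: LMSYS aggregates crowdsourced pairwise “battles” into Elo-style ratings (its published methodology has also used Bradley-Terry-style fitting). The sketch below uses made-up battle results and a simple online Elo update; it illustrates the idea, not LMSYS’s actual pipeline.

```python
# Simplified Elo-style scoring from pairwise battles, the general idea behind
# crowdsourced chatbot leaderboards. Battle results here are invented.
from collections import defaultdict

def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that the model rated r_a beats the model rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_battle(ratings, winner, loser, k=32):
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e)  # winner gains more for an upset
    ratings[loser] -= k * (1 - e)

ratings = defaultdict(lambda: 1000.0)
battles = [
    ("gemini-1.5-pro", "gpt-4o"),
    ("gpt-4o", "claude-3.5"),
    ("gemini-1.5-pro", "claude-3.5"),
    ("gemini-1.5-pro", "gpt-4o"),
]
for winner, loser in battles:
    record_battle(ratings, winner, loser)

for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model:16s} {rating:7.1f}")
```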

The news is a big win for Google, especially given the challenges and internal turmoil the company experienced developing Gemini, then known as Bard. Early feedback on Bard from Google employees called the chatbot “cringe-worthy,” and said its responses could lead to “serious injury or death.”

Google CEO Sundar Pichai was criticized for leading the company’s flawed Bard/Gemini rollout, with the issues a contributing factor in some critics calling for his removal as CEO.

Google has clearly made significant headway since those early issues, and is now reaping the rewards as Gemini is suddenly the AI model to beat.

Perplexity CEO Unveils Groundbreaking Publishers Program Amid AI Copyright Controversies https://www.webpronews.com/perplexity-ceo-unveils-groundbreaking-publishers-program-amid-ai-copyright-controversies/ Thu, 01 Aug 2024 17:26:55 +0000 https://www.webpronews.com/?p=606111 Aravind Srinivas, co-founder and CEO of Perplexity, joined CNBC’s “Squawk Box” to announce the launch of the company’s pioneering Publishers Program, a move set to transform the landscape of AI-generated content. The program will grant partners revenue sharing and access to Perplexity’s API, addressing the increasing concerns about AI, copyright, and plagiarism.

Innovative Partnership Model

Srinivas emphasized the strategic importance of high-quality content for Perplexity’s success. “From the beginning, we realized that to succeed as a product, we need to use high-quality sources of information,” he stated. “This requires us to work closely with the right publishing partners and create a robust ecosystem.” The new program includes prestigious names like Time, Fortune, and Entrepreneur, marking a significant milestone in Perplexity’s growth.

Revenue Sharing and Usage-Based Model

Unlike traditional models where platforms take content and pay for it, Perplexity’s approach is unique. “We’re establishing a new relationship where any advertising revenue generated from a query that uses our publishing partners’ information will be shared with them,” Srinivas explained. This usage-based revenue-sharing program aims to be a more sustainable way to collaborate with publishers, ensuring mutual benefits.

AI and Content Aggregation

Addressing concerns about AI’s impact on the “blue link economy,” Srinivas clarified that Perplexity is not merely redirecting traffic but enhancing user engagement through integrated advertising. “We are introducing advertising products on our platform where sponsored questions will follow a query, sharing the revenue with the publishers involved in the original answer,” he said.

Perplexity also helps publishers build their own AI engines using the company’s technology, enabling them to leverage AI for content delivery on their platforms. “We want not just the technology but a sustainable way for anybody on the internet to get an accurate answer,” Srinivas noted.
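
For publishers wondering what “access to Perplexity’s API” looks like in practice, Perplexity documents an OpenAI-compatible chat-completions endpoint, so the standard OpenAI client can point at it. The sketch below is a minimal guess at such a call; the base URL follows Perplexity’s public docs, and the model ID reflects its lineup around the time of this announcement and may have since changed.

```python
# Hedged sketch of a call to Perplexity's OpenAI-compatible API. The model ID
# is an assumption from Perplexity's docs at the time and may have changed.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",      # placeholder, not a real key
    base_url="https://api.perplexity.ai",   # Perplexity's documented endpoint
)

response = client.chat.completions.create(
    model="llama-3.1-sonar-small-128k-online",  # assumed model ID
    messages=[{"role": "user", "content": "Summarize today's AI news."}],
)
print(response.choices[0].message.content)
```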

Addressing Criticism and Copyright Concerns

The rise of AI-generated content has sparked debates about copyright and intellectual property. Forbes’ editor recently criticized Perplexity for allegedly stealing content. Srinivas responded, highlighting the platform’s commitment to transparency and attribution. “We always attribute sources of information and prominently display them, including the names of journalists for their clips,” he said. “We can always do better, and the feedback from our partners is crucial in improving our product.”

When pressed about past mistakes and potential compensation, Srinivas remained focused on future improvements. “I’m not going into specific issues, but our aim is to create a sustainable way to share revenue with our partners, ensuring high accuracy and integrity of information.”

A New Era of AI-Driven Content

The partnership agreements with major publishers are designed to foster collaboration rather than conflict. “The program is about revenue sharing, not the specifics of content usage,” Srinivas noted. “We’re aggregators of information, not trying to train our AI on proprietary data without permission.”

As AI continues to reshape content creation and distribution, Perplexity’s innovative model could set a new standard for the industry. By aligning with reputable publishers and ensuring ethical practices, the company aims to balance technological advancement with respect for intellectual property.

Future Prospects

Perplexity’s announcement marks a significant step in the evolution of AI and content marketing. The company’s approach to revenue sharing and its commitment to high-quality, ethically sourced information could pave the way for more collaborative and sustainable AI-driven content ecosystems.

Srinivas concluded, “We want to create a sustainable model that benefits both our users and partners, ensuring the integrity and accuracy of the information we provide.”

Apple Intelligence Will Reportedly Not Debut Until iOS 18.1 https://www.webpronews.com/apple-intelligence-will-reportedly-not-debut-until-ios-18-1/ Mon, 29 Jul 2024 12:00:00 +0000 https://www.webpronews.com/?p=606024 Apple Intelligence will reportedly be delayed, missing the iOS 18 release, and will be included in iOS 18.1 instead.

According to Bloomberg’s Mark Gurman, Apple Intelligence will not launch when it was expected. Instead, the feature will be delivered via an iOS 18.1 software update later in 2024, likely by October.

Apple Intelligence is poised to be one of the biggest upgrades to iOS in years, leveraging generative AI to improve Siri and unlock a host of new abilities. In its WWDC presentation, Apple demonstrated numerous examples of how generative AI could be used in day-to-day life, something other companies have struggled to do.

Despite the promise of Apple Intelligence, Apple fans will apparently have to wait a bit longer to make use of it.

OpenAI Releases GPT-4o mini, A Cost-Effective AI Model https://www.webpronews.com/openai-releases-gpt-4o-mini-a-cost-effective-ai-model/ Thu, 18 Jul 2024 20:35:28 +0000 https://www.webpronews.com/?p=605831 OpenAI announced the release of GPT-4o mini, the company’s “most cost-efficient small model” aimed at making AI “as broadly accessible as possible.”

OpenAI unveiled GPT-4o in mid-May, showing off the AI model’s real-time capabilities. GPT-4o is the company’s most powerful model to date, boasting impressive abilities ranging from deciphering written math equations to understanding mood and context.

The company is building on that success with GPT-4o mini, which “outperforms GPT-4 on chat preferences in LMSYS leaderboard,” the crowdsourced platform that evaluates large language models. Just as impressive, OpenAI says the new model is 60% cheaper than GPT-3.5 Turbo.

GPT-4o mini currently includes support for text and vision, but will add support for image, audio, and video inputs and outputs in future updates.

GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong performance in function calling, which can enable developers to build applications that fetch data or take actions with external systems, and improved long-context performance compared to GPT-3.5 Turbo.
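
To make the function-calling point concrete: a developer describes a tool’s schema to the model, and instead of prose the model can return a structured call for the application to execute. The sketch below uses OpenAI’s chat-completions tools interface with a made-up get_weather function; it is an illustration of the mechanism, not code from OpenAI’s announcement.

```python
# Illustrative function-calling request to GPT-4o mini. The get_weather tool
# is a made-up example of the "external systems" the excerpt mentions.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool the app would implement
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Austin?"}],
    tools=tools,
)

# The model may answer in prose or emit a structured tool call; handle both.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```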

OpenAI highlights three key areas where GPT-4o mini performs well in benchmarks.

Reasoning tasks: GPT-4o mini is better than other small models at reasoning tasks involving both text and vision, scoring 82.0% on MMLU, a textual intelligence and reasoning benchmark, as compared to 77.9% for Gemini Flash and 73.8% for Claude Haiku.

Math and coding proficiency: GPT-4o mini excels in mathematical reasoning and coding tasks, outperforming previous small models on the market. On MGSM, measuring math reasoning, GPT-4o mini scored 87.0%, compared to 75.5% for Gemini Flash and 71.7% for Claude Haiku. GPT-4o mini scored 87.2% on HumanEval, which measures coding performance, compared to 71.5% for Gemini Flash and 75.9% for Claude Haiku.

Multimodal reasoning: GPT-4o mini also shows strong performance on MMMU, a multimodal reasoning eval, scoring 59.4% compared to 56.1% for Gemini Flash and 50.2% for Claude Haiku.

The company says users can now access GPT-4o mini in place of GPT-3.5 across plans.

In ChatGPT, Free, Plus and Team users will be able to access GPT-4o mini starting today, in place of GPT-3.5. Enterprise users will also have access starting next week, in line with our mission to make the benefits of AI accessible to all.

Amazon Deploys Rufus AI Shopping Assistant to All US Customers https://www.webpronews.com/amazon-deploys-rufus-ai-shopping-assistant-to-all-us-customers/ Sat, 13 Jul 2024 13:00:00 +0000 https://www.webpronews.com/?p=605716 Amazon has made its generative AI-powered shopping assistant, Rufus, available to all US customers after several months of beta testing.

Rufus was introduced in February 2024 to a small subset of Amazon mobile app customers. The AI chatbot is designed to help answer questions, provide information, and inform shopping decisions. The company has used the feedback from the beta period to improve the chatbot, and is now rolling it out to all US customers.

Amazon says Rufus helps answer questions based on the information that is available for various products:

Customers are asking Rufus specific product questions, and Rufus is sharing answers based on the helpful information found in product listing details, customer reviews, and community Q&As. Customers are asking Rufus questions like, “Is this coffee maker easy to clean and maintain?” and “Is this mascara a clean beauty product?” They’re also clicking on the related questions that Rufus surfaces in the chat window to learn more about the product—for example, “What’s the material of the backpack?” Customers can also tap on “What do customers say?” to get a quick and helpful overview of customer reviews.

The AI chatbot is also able to help customers easily compare products:

Customers are using Rufus to quickly compare features by asking questions like, “What’s the difference between gas and wood fired pizza ovens?” Aspiring runners are asking questions such as, “Should I get trail shoes or running shoes?” and people shopping for TVs are asking Rufus to, “Compare OLED and QLED TVs.” I recently used Rufus to help me compare options and find my son his first baseball glove—“Comfortable baseball gloves for a 9 year old beginner.” I ended up buying this one, if you’re curious.

Interestingly, because Rufus is based on generative AI and trained to answer a wide variety of questions, it is able to answer questions that are not directly related to a purchase:

Because Rufus can answer a wide range of questions, it can help customers at any stage of their shopping journey. A customer interested in cookware may first ask, “What do I need to make a soufflé?” Preparing for special occasions is also popular among customers, with shoppers asking questions like, “What do I need for a summer party?”

Amazon’s AI chatbot is a good example of the tangible ways AI can improve the consumer experience, surfacing helpful information and informing decisions.

AWS Unveils $50 Million Initiative to Help Public Sector Adopt Generative AI https://www.webpronews.com/aws-unveils-50-million-initiative-to-help-public-sector-adopt-generative-ai/ Wed, 10 Jul 2024 12:35:00 +0000 https://www.webpronews.com/?p=605609 AWS has announced a $50 million initiative to help the public sector adopt generative AI and accelerate innovation.

The AWS Public Sector Generative Artificial Intelligence (AI) Impact Initiative was announced in late June, and relies on AWS generative AI services, including Amazon Bedrock, Amazon SageMaker, Amazon Q, AWS HealthScribe, AWS Inferentia, and AWS Trainium.

AWS says the public sector is trying to adopt and leverage generative AI, but faces unique challenges and limitations.

Across the public sector, leaders are seeking to leverage generative AI to become more efficient and agile. However, public sector organizations face several challenges such as optimizing resources, adapting to changing needs, improving patient care, personalizing the education experience, and strengthening security. To respond to these challenges, AWS is committed to helping public sector organizations unlock the potential of generative AI and other cloud-based technologies to positively impact society.

The company’s $50 million commitment will go toward training, technical expertise, and more.

As part of this initiative, AWS is committing up to $50 million in AWS Promotional Credits, training, and technical expertise across generative AI projects. Credit issuance determinations will be based on a variety of factors, including but not limited to the customer’s experience developing new technology solutions, the maturity of the project idea, evidence of future solution adoption, and the customer’s breadth of generative AI skills. The Impact Initiative is open to new or existing AWS Worldwide Public Sector customers and partners from enterprises worldwide who are building generative AI solutions to help solve society’s most pressing challenges.
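
For public sector teams sizing up the services listed above, a typical first step is a single model invocation through Amazon Bedrock. The sketch below uses boto3’s Converse API; the model ID and region are assumptions, so substitute whichever Bedrock model and region your account has enabled.

```python
# Minimal sketch of invoking a foundation model through Amazon Bedrock using
# boto3's Converse API. The model ID and region below are assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Draft a plain-language summary of our agency's FAQ."}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```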

Apple May Include Google Gemini At A.I. Launch, Anthropic May Onboard Later https://www.webpronews.com/apple-may-include-google-gemini-at-a-i-launch-anthropic-may-onboard-later/ Tue, 02 Jul 2024 16:30:59 +0000 https://www.webpronews.com/?p=605510 The latest report indicates that Apple may include Google Gemini alongside ChatGPT when its Apple Intelligence (A.I.) launches, with Anthropic a possible later addition.

Apple unveiled A.I. at WWDC 2024. While the company emphasized its own on-device models, as well as its Private Cloud Compute for more advanced queries, it also revealed it had signed a deal to make ChatGPT available to users who want access to it.

Shortly after, Apple made clear that it was open to working with other AI firms with the goal of giving users the choice of what model they want to use. According to Bloomberg’s Mark Gurman, Apple may already be preparing to incorporate Google’s Gemini alongside ChatGPT at the launch of A.I.

As for an Apple deal with Google or Anthropic, I expect at least the former to be announced around the time Apple Intelligence launches this fall.

It’s interesting that Gurman mentions Anthropic. Apple was reportedly in talks with Meta, but quickly opted to pass on any deal. According to Gurman, the fundamental issues were both privacy and capabilities. Apple has long been critical of Meta’s stance on user privacy, making any deal to use the social media company’s AI models problematic at best. What’s more, Apple evidently sees OpenAI, Google, and Anthropic as having superior offerings to Meta.

That last point is particularly good news for Anthropic. The company, which was founded by former OpenAI executives, has been working to set itself apart as a more responsible AI firm than OpenAI. The fact that it recently hired Jan Leike, OpenAI’s former safety team lead, has only helped support its efforts. The company has also been making headlines for its Claude model, with the latest version soundly beating OpenAI’s GPT-4o.

If Apple opts to incorporate Anthropic’s AI models at some point in the near future, it would be a big boost to the AI firm and help it fully come out from OpenAI’s shadow. In the meantime, users may at least have a choice of two of the leading options when A.I. officially launches.
