MachineLearningPro
Breaking News in Tech, Search, Social, & Business
https://www.webpronews.com/emergingtech/machinelearningpro/

OpenAI Warns of Emotional Attachments to GPT-4o Voice Mode Amid New System Update
https://www.webpronews.com/openai-warns-of-emotional-attachments-to-gpt-4o-voice-mode-amid-new-system-update/ (Fri, 09 Aug 2024 13:02:19 +0000)

In a world where technology increasingly intersects with human emotions, OpenAI’s latest update to its ChatGPT platform, introducing the GPT-4o voice mode, has sparked both excitement and concern. The voice mode, which allows users to interact with ChatGPT using natural spoken language, represents a significant leap forward in artificial intelligence. However, OpenAI itself has cautioned that this new feature could lead to users forming emotional attachments to the AI, a development that carries both societal implications and ethical dilemmas.

A Technological Leap with Human Consequences

The introduction of GPT-4o’s voice mode represents a significant technological leap, bringing with it profound implications for human-machine interaction. This new feature enables users to engage in conversations with the AI using a natural, human-like voice, which not only enhances accessibility and user experience but also blurs the line between human and machine. This development raises critical questions about the future of human relationships with AI and the ethical responsibilities that creators like OpenAI must navigate.

One of the most pressing concerns is the potential for users to form emotional attachments to AI, a phenomenon that experts have been warning about for years. Dr. Sherry Turkle, a professor at MIT who has studied the psychological effects of technology, cautions that “when technology becomes this intimate, we must ask ourselves what kinds of relationships we are fostering and what it means for our connections with real people.” The emotional weight carried by spoken words, as opposed to text, could make these interactions even more impactful, leading users to perceive AI as more than just a tool.

Emotional Attachment is Inevitable

Dr. Kate Darling, a researcher at the MIT Media Lab, echoes these concerns, noting that “the more lifelike and responsive an AI becomes, the easier it is for humans to project human characteristics onto it.” This projection, she argues, can lead to emotional attachment, which might have complex implications for how we interact with and depend on AI systems. Early testers of GPT-4o have reported feeling a sense of connection with the AI, describing its voice responses as “comforting” and “reassuring.” Such feedback suggests that the AI’s human-like capabilities could fulfill emotional needs traditionally met by human relationships.

However, this emotional engagement is not without its risks. As AI becomes more integrated into daily life, the potential for over-reliance grows, which could impact mental health and social dynamics. OpenAI has acknowledged these risks, emphasizing the importance of ongoing monitoring and the implementation of safeguards to prevent unintended consequences. Yet, as Dr. Darling points out, “the broader societal implications require ongoing discussion and careful consideration.”

The introduction of voice mode in AI systems like GPT-4o is a double-edged sword. While it offers exciting new possibilities for communication and interaction, it also necessitates a careful examination of the potential human consequences. As society moves forward with these innovations, the balance between technological advancement and ethical responsibility will be crucial in shaping a future where AI serves humanity without compromising our emotional well-being.

Safety and Ethical Considerations

The rollout of GPT-4o’s voice mode has been accompanied by heightened scrutiny around safety and ethical considerations. OpenAI has proactively addressed some of these concerns in its recently published GPT-4o System Card, which outlines the measures taken to mitigate potential risks. Among the foremost concerns is the unauthorized generation of voice content. Dr. Kate Crawford, a senior researcher at Microsoft Research, emphasizes the importance of controlling this capability: “The ability to generate synthetic voices that closely mimic real human speech opens up avenues for misuse, from fraud to deepfakes. It is critical that companies like OpenAI implement robust safeguards to prevent these technologies from being weaponized.”

To this end, OpenAI has implemented stringent measures to prevent the generation of unauthorized voice content, including the use of classifiers to detect deviations from approved voice presets. The company has also taken steps to ensure that the model cannot identify individuals based on their voice, which addresses privacy concerns. “Protecting user privacy and preventing misuse is paramount,” said Mira Murati, OpenAI’s Chief Technology Officer, during a recent interview. “We’ve worked extensively to ensure that the voice mode cannot be used to infringe on personal privacy or to impersonate individuals.”
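OpenAI has not published how its preset classifier works, so the following is only a conceptual sketch of the general approach of checking generated audio against approved voices. The `embed_voice` helper, the preset names, and the similarity threshold are all placeholders invented for illustration.

```python
# Conceptual sketch only: flag generated audio whose voice drifts from approved presets.
# The embedding function and threshold below are toy stand-ins, not OpenAI's method.
import numpy as np

def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Toy voice 'embedding': a normalized magnitude spectrum (placeholder for a real speaker-embedding model)."""
    spectrum = np.abs(np.fft.rfft(audio, n=512))[:256]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) + 1e-9))

def matches_approved_preset(audio: np.ndarray, preset_embeddings: dict, threshold: float = 0.85) -> bool:
    """Return True if the audio is close enough to at least one approved preset voice."""
    emb = embed_voice(audio)
    best = max(cosine_similarity(emb, preset) for preset in preset_embeddings.values())
    return best >= threshold

# Synthetic demonstration with made-up "preset" clips.
rng = np.random.default_rng(0)
presets = {"preset_a": embed_voice(rng.normal(size=16000)),
           "preset_b": embed_voice(rng.normal(size=16000))}
candidate_clip = rng.normal(size=16000)
print(matches_approved_preset(candidate_clip, presets))  # outputs that fail the check would be blocked or regenerated
```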

Potential To Generate Inappropriate Content

Another significant concern is the potential for the voice mode to generate harmful or inappropriate content. The System Card outlines how OpenAI has adapted its existing content moderation systems to apply to audio outputs, filtering out violent, erotic, or otherwise disallowed speech. Despite these safeguards, some experts believe that the technology’s rapid advancement necessitates continuous oversight. “The challenge with AI is that it evolves faster than the regulatory frameworks meant to govern it,” warns Dr. Timnit Gebru, a prominent AI ethics researcher. “We need to ensure that companies like OpenAI are not just setting their own rules but are also subject to external, independent oversight.”

The ethical considerations surrounding GPT-4o’s voice mode also extend to its impact on societal norms. As AI becomes more integrated into human communication, the lines between human and machine interactions could become increasingly blurred. “There’s a risk that as people grow accustomed to interacting with AI in human-like ways, they may start to expect similar interactions from real humans, which could alter social dynamics,” notes Dr. Margaret Mitchell, an AI ethics expert and former co-lead of Google’s Ethical AI team. This shift underscores the importance of ongoing dialogue about the ethical implications of AI technologies and the need for a collaborative approach to addressing these challenges.

As GPT-4o continues to evolve, the balance between innovation and ethical responsibility will remain a focal point for both developers and society at large. OpenAI’s efforts to address these concerns are commendable, but the broader conversation around AI ethics and safety must continue, involving a diverse range of stakeholders to ensure that the technology is developed and deployed in ways that truly benefit humanity.

Reactions and Implications

The introduction of GPT-4o’s voice mode has sparked significant discussion, particularly regarding its potential implications for both user experience and broader societal impacts. The ability for the AI to respond differently depending on how one speaks has intrigued many users, with some, like Aditya Singh, noting that this feature could make interactions more engaging. Singh mentioned, “I’d honestly prefer this, of course with some obvious limitations. Like if you’re enthusiastic, it should radiate that energy back.” This highlights a growing interest in making AI interactions feel more personalized and human-like, which could enhance user satisfaction but also raises questions about the consistency and reliability of responses.

Potential for Misuse

Others, however, have expressed concerns about the potential for misuse, particularly in the realm of impersonation and disinformation. For instance, Sean McLellan pointed out, “These are features, not bugs,” emphasizing that while the technology’s capabilities are impressive, they could easily be exploited if not properly managed. This sentiment is echoed by Hector Aguirre, who warned, “Without guardrails, it’s a recipe for disaster.” The ability to imitate voices or generate speech that sounds convincingly human could lead to scenarios where false information is spread more effectively, particularly in sensitive contexts such as elections or personal communication.

Some users, like Space Man, have already started considering the implications of this technology in a political context, noting, “Jesus, hadn’t considered some of these. Especially in context of the election. Wouldn’t be hard to fake ‘past phone calls’ of presidential candidates.” This concern is particularly relevant in an era where misinformation can quickly go viral, especially on platforms like X (formerly Twitter). The potential for GPT-4o’s voice mode to be used in creating realistic-sounding but entirely fabricated audio clips could amplify the risks of such disinformation campaigns.

Mixed Reactions

On the other hand, there are voices like that of Unemployed Capital Allocator who foresee the eventual open-sourcing of such technology, predicting, “All coming to open source, next year.” This raises further questions about how widely accessible this powerful technology could become and whether the safeguards currently in place by organizations like OpenAI will be sufficient to prevent misuse once the technology is in the public domain.

The mixed reactions from the community illustrate the double-edged nature of GPT-4o’s voice mode. While it promises to enhance AI-human interaction in exciting ways, it also opens up avenues for significant ethical and security challenges. As the technology continues to develop, the onus will be on both developers and regulators to ensure that these powerful tools are used responsibly, balancing innovation with the need to protect against potential harms.

Moving Forward

As we look ahead to the future of AI, the deployment of GPT-4o’s voice mode represents both a significant technological advancement and a set of profound challenges. OpenAI has made it clear that ensuring the responsible use of this technology is paramount. An OpenAI spokesperson emphasized, “Our priority is to ensure that these technologies are used responsibly,” signaling the company’s commitment to safeguarding against potential misuse.

The road forward will require continuous vigilance and adaptation. With the rapid pace of AI development, safety measures that are effective today might not be adequate in the near future. Sean Fumo, a technology analyst, pointed out the necessity for proactive measures: “We need to anticipate new risks and be proactive in addressing them.” This includes the ongoing refinement of technical safeguards as well as increasing public awareness about the implications and appropriate use of AI-driven voice technologies.

Shared Responsibility is Crucial

Collaboration between AI developers, policymakers, and the public will be crucial in navigating these uncharted waters. Steven Strauss, a digital ethics expert, remarked, “It’s not just about what the technology can do, but how we choose to use it.” This highlights the shared responsibility in guiding the ethical trajectory of AI advancements, ensuring that the benefits are maximized while minimizing potential harms.

Moreover, OpenAI’s commitment to transparency and improvement will be vital as the technology evolves. By engaging with external experts and making detailed system cards publicly available, OpenAI sets a high standard for responsible AI development. This ongoing dialogue and openness are critical as society adjusts to the increasingly prominent role of AI in daily life.

As we move forward, the integration of voice technology into various aspects of life will require not just technical innovation but also a robust framework for ethical decision-making and societal oversight. The future of AI will depend on our collective ability to harness its power responsibly and ensure that it serves the greater good.

Crafting a Machine Learning Career: A Roadmap for Aspiring Analysts
https://www.webpronews.com/crafting-a-machine-learning-career-a-roadmap-for-aspiring-analysts/ (Tue, 09 Jul 2024 21:08:28 +0000)

In the rapidly evolving technology landscape, machine learning has emerged as a cornerstone, driving innovation and efficiency across industries. Santiago Valdarrama, a seasoned machine learning engineer and YouTube content creator from the channel Underfitted, offers a comprehensive roadmap for those eager to dive into this dynamic field. Drawing on over two decades of experience working with industry giants such as Disney, Boston Dynamics, IBM, and Dell, Valdarrama provides a clear and structured path for aspiring machine learning professionals.

Building a Strong Foundation

The journey into machine learning begins with a solid understanding of Python, the programming language that dominates the field. “Start with Python,” Valdarrama advises. “Every scientific research paper is written in Python, all libraries are in Python, and it’s the language we use to communicate in the AI community.” He recommends Udacity’s intermediate Python program for those with some prior knowledge, noting that a plethora of free tutorials are available online for complete beginners.

Understanding Python is not a one-time checkbox but a continuous learning process. Valdarrama emphasizes, “I started learning Python after 20 years of coding in other languages, and even now, I feel like I’ve just scratched the surface.”

Immersive Learning with Kaggle and Google

Once comfortable with Python, Valdarrama suggests diving into machine learning with Kaggle tutorials. These tutorials are concise and beginner-friendly, providing a gentle introduction to essential machine learning concepts. “The ‘Intro to Machine Learning’ tutorial on Kaggle is a great starting point,” he notes. Following this, the ‘Intermediate Machine Learning’ tutorial offers further insights and practical experience.

The next step is the Google Machine Learning Crash Course, a comprehensive program consisting of 25 lessons spread over 15 hours. Originally designed to upskill Google’s own teams, this course is free and accessible, offering a robust intermediate-level education in machine learning.

Advanced Learning: Coursera Specialization

For those ready to tackle more advanced topics, Valdarrama recommends the Machine Learning Specialization on Coursera. This paid course requires a monthly subscription but provides in-depth knowledge and hands-on experience with more complex machine learning algorithms and mathematical concepts. “This specialization is more formal and includes rigorous math, making it a perfect bridge to advanced machine learning,” he explains.

Leveraging University Courses

Top universities, including MIT, NYU, and Cornell, have made their machine learning and deep learning courses available online for free. Valdarrama encourages students to take advantage of these resources, highlighting the MIT 6.S191 Introduction to Deep Learning and NYU’s Deep Learning course as excellent options.

Essential Reading

Valdarrama also shares his top book recommendations for budding machine learning professionals:

  1. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron: This book offers a comprehensive overview, from basic decision trees to advanced neural networks.
  2. Deep Learning with Python by François Chollet: Written by the creator of Keras, this book provides a full arc of a deep learning project, from data collection to deployment.
  3. Machine Learning with PyTorch and Scikit-Learn by Sebastian Raschka: This book focuses on PyTorch, another essential tool for machine learning engineers.

For those interested in generative AI and building applications with large language models, Valdarrama recommends Generative AI with LangChain, which covers using AI APIs and constructing AI workflows.

Practical Advice for Learning

Valdarrama stresses the importance of solving real problems to learn machine learning effectively. He advises beginners to start with common datasets, such as the Titanic dataset or the MNIST dataset, to gain practical experience. “Work on problems that another 10,000 people have worked on before,” he says. “This way, you’ll find plenty of resources and solutions to help you when you get stuck.”
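As a concrete illustration of that advice, the snippet below is a minimal starter exercise in the spirit Valdarrama describes. It uses scikit-learn’s bundled digits dataset, a small MNIST-style collection, as a stand-in for the full MNIST download; the model and split are arbitrary choices.

```python
# Minimal starter exercise: fit and evaluate a simple classifier on a well-worn dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 handwritten digits flattened to 64 features

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=2000)  # simple, well-documented baseline
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Because thousands of people have worked through this exact kind of exercise, almost any error message or modeling question it raises has already been answered somewhere.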

Sharing knowledge is another critical aspect of learning. Valdarrama encourages learners to find an outlet to explain what they’ve learned through blogging, social media, or video content. “Teaching others forces you to solidify your understanding,” he explains.

Following Curiosity

Machine learning is a vast field encompassing areas like computer vision, natural language processing, and time series analysis. Valdarrama advises students to explore broadly and then focus on the genuinely interesting areas. “Follow what makes you feel happy,” he says. “You’re more likely to become a specialist in a specific area rather than trying to master everything at once.”

With patience and perseverance, Valdarrama assures that a career in machine learning is within reach. His roadmap, filled with practical advice and rich resources, provides a clear path for those ready to embark on this exciting journey into the future of technology.

Simplifying Complex Data for Machine Learning: Insights from IBM’s Martin Keen on Principal Component Analysis
https://www.webpronews.com/simplifying-complex-data-for-machine-learning-insights-from-ibms-martin-keen-on-principal-component-analysis/ (Mon, 08 Jul 2024 21:23:05 +0000)

In the age of big data, extracting meaningful insights from vast datasets is a daunting challenge. In a recent video, Martin Keen, a Master Inventor at IBM, delves into Principal Component Analysis (PCA) as a powerful tool for simplifying complex data. Keen’s discussion offers a detailed exploration of PCA, highlighting its applications in various fields such as finance and healthcare and underscoring its significance in machine learning.

Understanding Principal Component Analysis

Principal Component Analysis (PCA) is a statistical technique that reduces the dimensionality of large datasets while preserving most of the original information. “PCA reduces the number of dimensions in large data sets to principal components that retain most of the original information,” Keen explains. This reduction is crucial for simplifying data visualization, enhancing machine learning models, and improving computational efficiency.

Keen illustrates PCA’s utility with a risk management example. In this scenario, understanding which loans are similar in risk requires analyzing multiple dimensions, such as loan amount, credit score, and borrower age. “PCA helps identify the most important dimensions, or principal components, enabling faster training and inference in machine learning models,” Keen notes. Additionally, PCA facilitates data visualization by reducing the data to two dimensions, allowing for easier identification of patterns and clusters.

The practical benefit of PCA is seen when dealing with data that contains potentially hundreds or even thousands of dimensions. These dimensions can complicate the analysis and visualization process. For instance, in the financial industry, evaluating loans requires considering various factors, such as credit scores, loan amounts, income levels, and employment history. Keen explains, “Intuitively, some dimensions are more important than others when considering risk. For example, a credit score is probably more important than the years a borrower has spent in their current job.”

PCA allows analysts to discard less significant dimensions by focusing on the principal components, thereby streamlining the dataset. This process speeds up machine learning algorithms by reducing the volume of data that needs to be processed and enhances the clarity of data visualizations.
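Keen’s loan example can be sketched in a few lines of scikit-learn. The features and numbers below are synthetic, invented purely to show the mechanics of standardizing the data and projecting it onto two principal components; they are not drawn from any real loan portfolio.

```python
# Sketch of PCA on synthetic loan data (all features and values are made up).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_loans = 500
loans = np.column_stack([
    rng.normal(25_000, 8_000, n_loans),   # loan amount
    rng.normal(680, 60, n_loans),         # credit score
    rng.normal(40, 12, n_loans),          # borrower age
    rng.normal(55_000, 15_000, n_loans),  # annual income
])

X = StandardScaler().fit_transform(loans)  # put every dimension on equal footing
pca = PCA(n_components=2)
projected = pca.fit_transform(X)           # each loan becomes a 2-D point

print("variance explained by PC1 and PC2:", pca.explained_variance_ratio_)
```

Plotting `projected` gives the kind of two-dimensional view Keen describes, where loans with similar characteristics land near one another.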

Historical Context and Modern Applications

PCA, credited to Karl Pearson in 1901, has gained renewed importance with the advent of advanced computing. Today, it is integral to data preprocessing in machine learning. “PCA can extract the most informative features while preserving the most relevant information from large datasets,” Keen states. This capability is vital in mitigating the “curse of dimensionality,” where high-dimensional data negatively impacts model performance.

The “curse of dimensionality” refers to the phenomenon where the performance of machine learning models deteriorates as the number of dimensions increases. This occurs because high-dimensional spaces make identifying patterns and relationships within the data difficult. PCA combats this by projecting high-dimensional data into a smaller feature space, simplifying the dataset without significant loss of information.

By projecting high-dimensional data into a smaller feature space, PCA also addresses overfitting, a common issue where models perform well on training data but poorly on new data. “PCA minimizes the effects of overfitting by summarizing the information content into uncorrelated principal components,” Keen explains. These components are linear combinations of the original variables that capture maximum variance.

Real-World Applications

Keen highlights several practical applications of PCA. In finance, PCA aids in risk management by identifying key variables that influence loan repayment. For example, by reducing the dimensions of loan data, banks can more accurately predict which loans are likely to default. This enables better decision-making and risk assessment.

In healthcare, PCA has been used to diagnose diseases more accurately. For instance, a study on breast cancer utilized PCA to reduce the dimensions of various data attributes, such as the smoothness of nodes and perimeter of lumps, leading to more accurate predictions using a logistic regression model. “PCA helps in identifying the most important variables in the data, which improves the performance of predictive models,” Keen notes.
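The study Keen refers to is not reproduced here, but the same pattern, reduce the attributes with PCA and then fit a logistic regression, can be sketched with scikit-learn’s bundled breast cancer dataset. The number of components and other settings are arbitrary choices for illustration.

```python
# Illustrative PCA + logistic regression pipeline on scikit-learn's breast cancer dataset.
# A generic sketch of the approach, not the study described in the video.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # 30 attributes such as smoothness and perimeter

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),          # keep only the most informative directions
    LogisticRegression(max_iter=2000),
)

scores = cross_val_score(pipeline, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```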

PCA is also invaluable in image compression and noise filtering. “PCA reduces image dimensionality while retaining essential information, making images easier to store and transmit,” Keen explains. PCA effectively removes noise from data by focusing on principal components that capture underlying patterns. In image compression, PCA helps create compact representations of images, making them easier to store and transmit. This is particularly useful in applications such as medical imaging, where large volumes of high-resolution images need to be managed efficiently.

Moreover, PCA is widely used for data visualization. Datasets with dozens or hundreds of dimensions can be difficult to interpret in many scientific and business applications. PCA helps to visualize high-dimensional data by projecting it into a lower-dimensional space, such as a 2D or 3D plot. This simplification allows researchers and analysts to observe patterns and relationships within the data more easily.

The Mechanics of PCA

At its core, PCA involves summarizing large datasets into a smaller set of uncorrelated variables known as principal components. The first principal component (PC1) captures the highest variance in the data, representing the most significant information. “PC1 is the direction in space along which the data points have the highest variance,” Keen explains. The second principal component (PC2) captures the next highest variance and is uncorrelated with PC1.

Keen emphasizes that PCA’s strength lies in its ability to simplify complex datasets without significant information loss. “Effectively, we’ve kind of squished down potentially hundreds of dimensions into just two, making it easier to see correlations and clusters,” he states.

The PCA process involves several steps. First, the data is standardized, ensuring that each variable contributes equally to the analysis. Next, the data’s covariance matrix is computed, which helps understand how the variables relate to each other. Eigenvalues and eigenvectors are then calculated from this covariance matrix. The eigenvectors correspond to the directions of the principal components, while the eigenvalues indicate the amount of variance captured by each principal component. Finally, the data is projected onto these principal components, reducing its dimensionality.
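Those steps can be written out directly in NumPy. The sketch below follows the sequence described above, standardize, compute the covariance matrix, take its eigenvectors and eigenvalues, and project; in practice a library routine such as scikit-learn’s PCA would normally be used instead.

```python
# From-scratch PCA following the steps described above (NumPy only).
import numpy as np

def pca(X: np.ndarray, n_components: int):
    # 1. Standardize so each variable contributes equally.
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2. The covariance matrix captures how the variables vary together.
    cov = np.cov(X_std, rowvar=False)
    # 3. Eigenvectors give the directions of the principal components;
    #    eigenvalues give the variance captured along each direction.
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]          # largest variance first
    components = eigenvectors[:, order[:n_components]]
    explained_ratio = eigenvalues[order[:n_components]] / eigenvalues.sum()
    # 4. Project the standardized data onto the principal components.
    return X_std @ components, explained_ratio

# Example on random 10-dimensional data reduced to 2 components.
X = np.random.default_rng(0).normal(size=(200, 10))
projected, explained = pca(X, n_components=2)
print(projected.shape, explained)
```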

Conclusion

In an era of continually increasing data complexity, Principal Component Analysis stands out as a crucial tool for data scientists and machine learning practitioners. Keen’s insights underscore PCA’s versatility and effectiveness in various applications, from financial risk management to healthcare diagnostics. As Keen concludes, “If you have a large dataset with many dimensions and need to identify the most important variables, take a good look at PCA. It might be just what you need in your modern machine learning applications.”

For data enthusiasts and professionals, Keen’s discussion offers a valuable guide to understanding and implementing PCA, reinforcing its relevance in the ever-evolving landscape of data science. As technology advances, the ability to simplify and interpret complex data will remain a cornerstone of effective data analysis and machine learning, making PCA an indispensable tool in the data scientist’s toolkit.

Navigating Unsupervised Machine Learning in the Semiconductor Industry: Insights from Galaxy Semiconductor’s Wes Smith
https://www.webpronews.com/navigating-unsupervised-machine-learning-in-the-semiconductor-industry-insights-from-galaxy-semiconductors-wes-smith/ (Tue, 02 Jul 2024 21:44:55 +0000)

In an illuminating discussion at the ASMC Conference, Wes Smith, CEO and co-founder of Galaxy Semiconductor, delved into the transformative potential of unsupervised machine learning within the semiconductor industry. Smith presented a paper co-authored with Dr. Francois Beuzier and Danieli Pagano of STMicroelectronics, highlighting advancements in epitaxy process control.

“Unsupervised machine learning offers us a unique advantage in process control by identifying outlier conditions without extensive training data,” Smith explained. This method significantly reduces the time and data required to implement effective machine learning models in semiconductor manufacturing, where acquiring vast training data can be impractical.

Understanding Unsupervised Machine Learning

Unlike supervised machine learning, which relies on labeled datasets to train algorithms, unsupervised learning algorithms analyze data without prior labeling, making sense of the patterns and structures within the data on their own. This approach is particularly beneficial in semiconductor manufacturing, where generating labeled data can be challenging and time-consuming.

“We’re pushing beyond traditional statistical process control techniques by employing sophisticated unsupervised machine learning algorithms,” said Smith. “These algorithms enable us to monitor and control the semiconductor manufacturing process more effectively, identifying potential issues before they become critical.”

Real-World Applications and Benefits

Smith illustrated the practical applications of unsupervised machine learning with examples from Galaxy Semiconductor’s work. One notable project involved analyzing data from epitaxy process equipment to detect outlier conditions that could indicate potential failures or inefficiencies.

“By using unsupervised learning, we can focus on the most critical aspects of the process, such as identifying deviations in temperature, pressure, or other key parameters, without being overwhelmed by the sheer volume of data,” Smith noted. This streamlined approach allows for quicker response times and more efficient process control, ultimately leading to higher yields and reduced production costs.
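Galaxy Semiconductor’s algorithms are proprietary, so the sketch below only illustrates the general idea with an off-the-shelf tool: an Isolation Forest trained on synthetic temperature and pressure readings, flagging runs that deviate from the bulk of the data. The feature names, set points, and contamination rate are all assumptions made for the example.

```python
# Generic unsupervised outlier detection on synthetic process data
# (illustrative only; not Galaxy Semiconductor's actual method).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Normal runs: temperature (C) and chamber pressure (Torr) near assumed set points.
normal_runs = np.column_stack([
    rng.normal(1150, 5, 300),
    rng.normal(80, 2, 300),
])
# A few drifting runs a process engineer would want flagged.
drifting_runs = np.column_stack([
    rng.normal(1180, 5, 5),
    rng.normal(70, 2, 5),
])
runs = np.vstack([normal_runs, drifting_runs])

detector = IsolationForest(contamination=0.02, random_state=0).fit(runs)
labels = detector.predict(runs)  # -1 marks suspected outlier runs

print("flagged run indices:", np.where(labels == -1)[0])
```

Note that the detector never sees pass/fail labels, which is the point Smith makes: outlier runs are surfaced without any labeled training data.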

Industry Insights and Future Directions

During the interview, Smith noted the growing interest in advanced process control techniques by defense contractors and major memory manufacturers. One question from a defense contractor centered on deploying Galaxy’s software in real-time feedback loops.

“Integrating our algorithms into real-time feedback systems is an area of active research and development,” Smith responded. “Our goal is to create systems that can detect anomalies and automatically adjust process parameters to maintain optimal conditions.”

Smith also emphasized the importance of collaboration and knowledge sharing within the industry. “These conferences are invaluable for exchanging ideas and learning from each other,” he said. “Every time we present or attend, we gain new insights that help us refine our approach and explore new opportunities.”

The Shift Towards Unsupervised Learning

Smith’s preference for unsupervised learning stems from a university research project at Harvey Mudd College, where the need for extensive training data became apparent. “The reality is that we often don’t have access to large amounts of labeled data,” he explained. “Unsupervised learning allows us to bypass this hurdle and still achieve high levels of accuracy and reliability.”

This approach addresses the data scarcity issue and opens new avenues for innovation. By leveraging unsupervised learning, Galaxy Semiconductor can develop more adaptive and resilient models capable of handling various scenarios and data variations.

Exploring Further: Key Benefits and Challenges

Unsupervised machine learning has its challenges, but the benefits often outweigh the obstacles. One significant advantage is the ability to uncover hidden patterns and relationships within the data that may not be immediately apparent. This can lead to new insights and more informed decision-making processes.

“Unsupervised learning helps us discover nuances in the data that we might miss with a traditional approach,” Smith explained. “For example, we can identify subtle changes in process conditions that could indicate potential issues long before they become critical, allowing us to take proactive measures.”

However, the complexity of unsupervised algorithms and the need for robust computational resources can be a hurdle. “Implementing these models requires a deep understanding of the underlying algorithms and the specific processes we are monitoring,” Smith noted. “It’s not a one-size-fits-all solution, requiring continuous refinement and validation.”

Practical Applications in Different Sectors

Smith shared several real-world applications of unsupervised machine learning in the semiconductor industry and beyond. In addition to process control in manufacturing, these techniques are being applied in areas such as predictive maintenance, quality control, and supply chain optimization.

“In predictive maintenance, unsupervised learning models can analyze equipment data to predict failures before they occur, reducing downtime and maintenance costs,” Smith explained. “In quality control, these models can detect anomalies in production batches, ensuring consistent product quality.”

The versatility of unsupervised learning also extends to sectors like finance and healthcare. “We’ve seen successful applications in financial fraud detection and patient data analysis,” Smith said. “The ability to identify outliers and patterns without predefined labels makes unsupervised learning a powerful tool across various industries.”

Future Prospects and Innovation

Looking ahead, Smith sees tremendous potential for further advancements in unsupervised machine learning. “The technology is evolving rapidly, and we are just beginning to scratch the surface of what’s possible,” he said. “We are exploring new ways to integrate these models with other advanced technologies, such as edge computing and the Internet of Things (IoT), to create more responsive and adaptive systems.”

Smith also emphasized the importance of ongoing research and collaboration. “We need to continue pushing the boundaries of what’s possible, working with academic institutions, industry partners, and our research teams,” he said. “By fostering a collaborative environment, we can accelerate innovation and bring these cutting-edge solutions to market more quickly.”

Embracing the Future of Machine Learning

As the semiconductor industry continues to evolve, integrating advanced machine learning techniques like unsupervised learning is becoming increasingly critical. Under Wes Smith’s leadership, Galaxy Semiconductor is at the forefront of this transformation, pioneering new methods to enhance process control and improve manufacturing outcomes.

“Unsupervised machine learning is not just a tool for today; it’s a gateway to the future of semiconductor manufacturing,” Smith concluded. “We’re excited to continue pushing the boundaries and exploring the vast potential of these technologies.”

AI Non-Profit Poaches Apple’s Head of Machine Learning For CEO
https://www.webpronews.com/ai-non-profit-poaches-apples-head-of-machine-learning-for-ceo/ (Fri, 21 Jun 2024 15:12:20 +0000)

Apple’s head of machine learning, Ali Farhadi, is leaving the company to become CEO of an AI non-profit.

Ali Farhadi joined Apple from Allen Institute for AI (AI2) in 2020, when the Cupertino company bought Xnor.ai, which Farhadi co-founded while at AI2. Farhadi went on to head up Apple’s machine learning efforts.

Farhadi is now rejoining the institute he previously spent six years with, only this time as CEO.

“As we face unprecedented changes in the development and usage of AI, I could not think of a better time to return to AI2 as CEO,” said Farhadi. “Today more than ever, the world needs truly open and transparent AI research that is grounded in science and a place where data, algorithms, and models are open and available to all. I believe this radical approach to openness is essential for building the next generation of AI. The world class researchers and engineers at AI2 are uniquely positioned to lead this new open and trusted approach to AI development.”

“Ali is the truly rare leader who combines expertise as an executive, entrepreneur, academic, and researcher. Throughout his career, he has demonstrated the transformative power of AI through his unique ability to channel deep scientific research into product solutions,” said Dr. Peter Lee, member of AI2’s board of directors and corporate vice president of Microsoft Research & Incubations. “As the premier AI research and engineering nonprofit, AI2’s work to advance the science and impact of artificial intelligence on a global scale has never been more critical. We are thrilled that Ali will lead the organization’s next chapter and carry on Paul Allen’s vision for AI as a positive force in the world.”

Farhadi will begin his new role effective July 31.

Mozilla Acquires Pulse Team for Machine Learning Projects
https://www.webpronews.com/mozilla-acquires-pulse-team-for-machine-learning-projects/ (Sat, 01 Jun 2024 18:58:07 +0000)

Mozilla has acquired the Pulse team, a group of developers behind a popular Slack status update tool of the same name.

It’s fairly rare for Mozilla to make an acquisition. As a result, when the organization does, it’s worth taking note. Pulse was a powerful status-updating tool that could automatically update individuals’ status based on calendar appointments and more.

Despite Pulse closing shop, Mozilla clearly sees potential in what the Pulse team accomplished, specifically in the realm of machine learning.

“I’m proud to announce that we have acquired Pulse, an incredible team that has developed some truly novel machine learning approaches to help streamline the digital workplace,” wrote chief product officer Steve Teixeira. “The products that Raj, Jag, Rolf, and team have built are a great demonstration of their creativity and skill, and we’re incredibly excited to bring their expertise into our organization. They will spearhead our efforts in applied ethical machine learning, as we invest to make Mozilla products more personal, starting with Pocket.”

Teixeira says the two companies had similar goals and vision of what is needed when building products for consumers.

“Which explains why we were so excited when we began talking to the Pulse team,” Teixeira said. “It became immediately obvious that we both fundamentally agree that the world needs a model where automated systems are built from day one with individual people as the primary beneficiary. Mozilla, with an almost 25 year history of building products with people and privacy at their core, is the right organization to do that. And with Pulse as part of our team, we can move even more quickly to set a new example for the industry.”

Teixeira says the team’s work will eventually make its way into Mozilla’s entire portfolio of products.

Programmers Beware: A New AI Can Program As Good As a Human
https://www.webpronews.com/programmers-beware-a-new-ai-can-program-as-good-as-a-human/ (Thu, 02 May 2024 18:07:33 +0000)

As if the programming landscape wasn’t competitive enough, a new AI, AlphaCode, could start giving some programmers a run for their money.

Created by DeepMind, Alphabet’s AI company, AlphaCode was designed to write “computer programs at a competitive level.” The company appears to have achieved its goal, with AlphaCode achieving “an estimated rank within the top 54% of participants in programming competitions.”

Essentially what DeepMind is saying is that AlphaCode is competitive with the average human programmer, although it still can’t match truly gifted ones. Nonetheless, even that accomplishment is a major step forward and a significant victory for AI development.

I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the most difficult part) to invent it. AlphaCode managed to perform at the level of a promising new competitor. I can’t wait to see what lies ahead!

Mike Mirzayanov, Founder of Codeforces, a platform that hosts coding competitions.

AWS Launches CodeWhisperer, a Machine Learning Programming Companion
https://www.webpronews.com/aws-codewhisperer-preview/ (Sat, 23 Mar 2024 20:38:57 +0000)

Amazon has launched a preview of CodeWhisperer, a programming companion that uses machine learning to assist development.

Artificial intelligence and machine learning are increasingly taking on an important role in development. The technologies can be used to automate testing, ensure build quality, and assist with actual coding. GitHub has Copilot, and now AWS is previewing CodeWhisperer.

“CodeWhisperer will continually examine your code and your comments, and present you with syntactically correct recommendations,” writes Jeff Barr, Chief Evangelist for AWS. “The recommendations are synthesized based on your coding style and variable names, and are not simply snippets.

“CodeWhisperer uses multiple contextual clues to drive recommendations including the cursor location in the source code, code that precedes the cursor, comments, and code in other files in the same projects. You can use the recommendations as-is, or you can enhance and customize them as needed. As I mentioned earlier, we trained (and continue to train) CodeWhisperer on billions of lines of code drawn from open source repositories, internal Amazon repositories, API documentation, and forums.”

Those interested in joining the preview and testing CodeWhisperer can do so here.

Zoom Continues to Clarify Controversial AI Terms of Service
https://www.webpronews.com/zoom-continues-to-clarify-controversial-ai-terms-of-service/ (Mon, 07 Aug 2023 21:55:07 +0000)

Zoom has continued to clarify its AI terms of service after backlash from customers and critics alike.

Following backlash to its updated terms of service, Zoom’s Chief Product Officer, Smita Hashim, authored a blog post clarifying the company’s stance on using customer data to train AI models. Many outlets, including WPN, pointed out that Hashim’s statements were potentially in conflict with Zoom’s TOS. Zoom has now, once again, updated its TOS, as well as the blog post, to clarify the matter further.

“We’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent,” writes Hashim.

What’s more, the company has added a blog section regarding healthcare and education customers:

What this means for healthcare and education customers

We will not use customer content, including education records or protected health information, to train our artificial intelligence models without your consent.

We routinely enter into student data protection agreements with our education customers and legally required business associate agreements (BAA) with our healthcare customers. Our practices and handling of education records, pupil data, and protected healthcare data are controlled by these separate terms and applicable laws.

Zoom provided the following statement to WPN:

“Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes,” said a company spokesperson. “We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”

Zoom Addresses AI Training Controversy but Clears Up Very Little
https://www.webpronews.com/zoom-addresses-ai-training-controversy/ (Mon, 07 Aug 2023 17:31:21 +0000)

Zoom has addressed its AI training controversy, with Chief Product Officer Smita Hashim writing a blog post to shed light on the company’s policies.

Zoom quietly changed its terms of service a couple of months ago, adding in clauses that allow it to use customer service data to train its AI. Needless to say, the revelation is not going over well with the company’s customers.

Hashim has written a blog post aimed at reassuring customers. The pertinent sections of her post are quoted below:

  1. In Section 10.1 (coupled with 10.6), our intention was to make clear that customers create and own their own video, audio, and chat content. We have permission to use this customer content to provide value-added services based on this content, but our customers continue to own and control their content. For example, a customer may have a webinar that they ask us to livestream on YouTube. Even if we use the customer video and audio content to livestream, they own the underlying content.
  2. Section 10.2 covers that there is certain information about how our customers in the aggregate use our product – telemetry, diagnostic data, etc. This is commonly known as service generated data. We wanted to be transparent that we consider this to be our data so that we can use service generated data to make the user experience better for everyone on our platform. For example, it is helpful to know generally what time of day in a particular region we have heavy usage so we can better balance loads in our data centers and provide better video quality for all of our users.
  3. In Section 10.4, our intention was to make sure that if we provided value-added services (such as a meeting recording), we would have the ability to do so without questions of usage rights. The meeting recording is still owned by the customer, and we have a license to that content in order to deliver the service of recording. An example of a machine learning service for which we need license and usage rights is our automated scanning of webinar invites / reminders to make sure that we aren’t unwittingly being used to spam or defraud participants. The customer owns the underlying webinar invite, and we are licensed to provide the service on top of that content. For AI, we do not use audio, video, or chat content for training our models without customer consent. (Emphasis theirs)

Overall, the company’s clarification is sure to reassure many users. Hashim reiterated that customers own their own data, and that AI is not trained on any audio, video, or chat content without the customer’s consent. Hashim also makes clear that the platform’s AI features and AI data collection can be disabled.

Unfortunately, Hashim’s statements seem to directly conflict with what Section 10.4 actually says:

You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof; and (iii) for any other purpose relating to any use or other act permitted in accordance with Section 10.3. If you have any Proprietary Rights in or to Service Generated Data or Aggregated Anonymous Data, you hereby grant Zoom a perpetual, irrevocable, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to enable Zoom to exercise its rights pertaining to Service Generated Data and Aggregated Anonymous Data, as the case may be, in accordance with this Agreement.

While Hashim may be accurately stating what Zoom’s intent is, the fact remains that Section 10.4 is so overly broad in its application that it can be interpreted any number of ways.

In addition, the company is clearly going to continue to collect service generated data, defined as “telemetry, diagnostic data, etc.,” with no option for customers to opt out of that collection. As Hashim states, “we consider this to be our data.” And, as the TOS outline, the company will use this data to train its AI and ML models:

You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement.

Zoom’s clarification is clearly something of a mixed bag, eliminating some of the biggest concerns with the new policy while leaving others.

Zoom Updates Terms to Use Customer Data for AI Training With No Opt-Out [Updated]
https://www.webpronews.com/zoom-updates-terms-to-use-customer-data-for-ai-training-with-no-opt-out/ (Mon, 07 Aug 2023 12:00:00 +0000)

Updated: See Zoom’s response here…

Zoom has rolled out a controversial update to its terms of service, adding a clause that allows it to use customer data for AI and ML training.

The pertinent clause is quoted below:

You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement. In furtherance of the foregoing, if, for any reason, there are any rights in such Service Generated Data which do not accrue to Zoom under this Section 10.2 or as otherwise provided in this Agreement, you hereby unconditionally and irrevocably assign and agree to assign to Zoom on your behalf, and you shall cause your End Users to unconditionally and irrevocably assign and agree to assign to Zoom, all right, title, and interest in and to the Service Generated Data, including all Proprietary Rights relating thereto.

Interestingly, there is no option for customers to opt out of the collection and AI training.

In the early days of the pandemic, Zoom repeatedly faced criticism for lax security, alienating some users. While the company eventually improved its security, it looks like it is once again willing to alienate users, only this time over its heavy-handed AI training clause.

DigitalOcean Buys Paperspace For Its GPU-Powered AI Solutions
https://www.webpronews.com/digitalocean-buys-paperspace-for-its-gpu-powered-ai-solutions/ (Thu, 06 Jul 2023 18:36:57 +0000)

DigitalOcean announced it has acquired Paperspace for $111 million in cash, with the goal of integrating Paperspace’s AI solutions.

Paperspace is a cloud-as-a-service provider with an emphasis on GPU-powered AI and ML applications. DigitalOcean clearly sees the acquisition as a way for the company to bolster its own AI offerings.

The increasing demand for AI/ML cloud solutions makes Paperspace’s GPU-powered infrastructure and AI/ML focused software stack valuable additions to DigitalOcean’s portfolio. Like DigitalOcean’s approach to the cloud, Paperspace simplifies the AI/ML experience, enabling easy and cost-effective experimentation and production across various AI/ML use cases, such as generative media, text analysis and natural language understanding, recommendation engines, image classification and many others.

DigitalOcean emphasizes the deal as a win-win for both companies’ customers. While DigitalOcean customers gain access to advanced AI/ML solutions, Paperspace customers will be able to benefit from DigitalOcean’s wider cloud offerings.

“We are excited to expand our portfolio tailored to the world’s SMBs and startups with simplified AI/ML offerings,” said Yancey Spruill, CEO of DigitalOcean. “This acquisition marks a significant milestone in DigitalOcean’s journey to revolutionize how SMBs and startups harness the power of the cloud and AI/ML for their applications and businesses. The combined offerings allow customers to focus more on building applications and growing their businesses and less on the infrastructure powering them.”

“DigitalOcean is renowned for simplifying complex cloud technologies and making them more accessible to developers and businesses alike,” said Dillon Erb, Co-founder and CEO of Paperspace. “We are thrilled to join forces with DigitalOcean, as we believe there is no better company to unlock the endless possibilities of AI/ML for developers and businesses alike.”

Ecommerce, Search, Social… and Conversational Space?
https://www.webpronews.com/liveperson-conversational-space-2/ (Mon, 15 May 2023 08:00:58 +0000)

“When I look at the conversational space I think it’s going to have as much impact as ecommerce or search or social,” says LivePerson CEO Rob Locascio. “The conversational space is going to be just as big. I think you’ll see one day that there will be a trillion dollar company in this space and I want it to be us. The things we’re investing in right now and setting up for will allow us to do that. That’s what’s important.”

Rob Locascio, CEO of LivePerson, predicts that the AI-driven conversational space will ultimately have as much impact and be as big an industry as ecommerce, search, or social. Locascio was interviewed by Jim Cramer on CNBC:

Ecommerce, Search, Social… and Conversational Space?

When I look at the conversational space I think it’s going to have as much impact as ecommerce or search or social. The ability to talk to a machine and have a natural conversation, it’s in the collective consciousness of people. We all believe the Alexa type situation should happen with every company. 

We do that with Delta and T-Mobile and all these big brands. What we’re looking at now is how do we take that to the world? LiveIntent is proprietary technology to look at the intent that a consumer is having with the brand. In terms of I want to buy something, we have a way to analyze that and then use machine learning algorithms to then scale those conversations. That’s what this is about. 

Healthcare Companies Defending Themselves From Amazon Via AI

In Q4 we signed a couple healthcare companies. They want to talk about defending themselves from Amazon because Amazon said they want to go into healthcare. The way they think they can do that is scaling the conversations they are having with their customers and creating a totally different experience. You go to a doctor, you have an experience with them, you capture that on a messaging platform and an AI will help you with whatever is wrong with you. You want to process a bill instead of calling and being put on hold, you do that through a conversational experience. 

They want to game change it. The only way they’re going to defend themselves is to get into the conversational space. That’s what they see and we’re the company they’re trusting to scale their operations with the conversational platform.

Conversational Space Is Going To Be As Big As Search and Social

The conversational space is going to be as big as search and social. I think you’ll see one day that there will be a trillion dollar company in this space and I want it to be us. The things we’re investing in right now and setting up for will allow us to do that. That’s what’s important. The Amazon’s and the Facebook’s and Apple’s, they’re in the space. Jeff Bezos made a big bet obviously in Alexa to say this is the way it’s going to be. 

It can’t just be Amazon and Alexa. It has to be other companies getting access to that technology and that’s what we are providing. Who else is providing it? We’re one of the largest companies in the world to do this. Even though we’re not big tech, we are large enough to go ahead and go after them. We are large enough to go ahead and define a space and win it.

]]>
588681
Microsoft Edge Brings Video Upscaling to Low-Quality Videos
https://www.webpronews.com/microsoft-edge-brings-video-upscaling-with-to-low-quality-videos/ (Mon, 06 Mar 2023 18:17:35 +0000)

Microsoft Edge users are getting a useful new feature that will allow them to upscale old, low-quality videos.

According to Microsoft, one out of three internet videos played in Edge is 480p or less. There are a number of possible reasons, including a media provider serving a low-quality version of the video or the original being shot in low resolution. The company wants to change this and is leveraging the power of AI and machine learning to enhance video quality during playback.

We are excited to introduce an experimental video enhancement experience, powered by AI technology from Microsoft research called Video Super Resolution. It is a technology that uses machine learning to enhance the quality of any video watched in a browser. It accomplishes this by removing blocky compression artifacts and upscaling video resolution so you can enjoy crisp and clear videos on YouTube, and other streaming platforms that play video content without sacrificing bandwidth no matter the original video resolution.

Because of the computational requirements, the feature is only available on computers with either an Nvidia RTX 20/30/40 series GPU or an AMD RX5700-RX7800 series GPU.

The video being upscaled must also be played at less than 720p, must be both taller and wider than 192 pixels, and cannot be protected by DRM.

The experimental feature is available to 50% of users in the Canary channel.
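
To make those conditions concrete, here is a minimal, purely illustrative Python sketch of the same eligibility rules. The function name and parameters are assumptions made for this example; Edge performs these checks internally and exposes no such API.

def vsr_eligible(gpu_supported: bool, width_px: int, height_px: int,
                 playback_height_px: int, drm_protected: bool) -> bool:
    """Mirror the upscaling conditions described above (illustrative only)."""
    plays_below_720p = playback_height_px < 720           # played at less than 720p
    large_enough = width_px > 192 and height_px > 192     # larger than 192 pixels in both dimensions
    return gpu_supported and plays_below_720p and large_enough and not drm_protected

# Example: a non-DRM 480p video on a machine with a supported GPU qualifies.
print(vsr_eligible(gpu_supported=True, width_px=854, height_px=480,
                   playback_height_px=480, drm_protected=False))  # True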

Ecommerce, Search, Social… and Conversational Space? https://www.webpronews.com/liveperson-conversational-space/ Sun, 15 Jan 2023 09:00:58 +0000 https://www.webpronews.com/?p=500607 “When I look at the conversational space I think it’s going to have as much impact as ecommerce or search or social,” says LivePerson CEO Rob Locascio. “The conversational space is going to be just as big. I think you’ll see one day that there will be a trillion dollar company in this space and I want it to be us. The things we’re investing in right now and setting up for will allow us to do that. That’s what’s important.”

Rob Locascio, CEO of LivePerson, predicts that the AI-driven conversational space will ultimately have as much impact and be as big an industry as ecommerce, search, or social. Locascio was interviewed by Jim Cramer on CNBC:

Ecommerce, Search, Social… and Conversational Space?

When I look at the conversational space I think it’s going to have as much impact as ecommerce or search or social. The ability to talk to a machine and have a natural conversation, it’s in the collective consciousness of people. We all believe the Alexa type situation should happen with every company. 

We do that with Delta and T-Mobile and all these big brands. What we’re looking at now is how do we take that to the world? LiveIntent is proprietary technology to look at the intent that a consumer is having with the brand. In terms of I want to buy something, we have a way to analyze that and then use machine learning algorithms to then scale those conversations. That’s what this is about. 

Healthcare Companies Defending Themselves From Amazon Via AI

In Q4 we signed a couple healthcare companies. They want to talk about defending themselves from Amazon because Amazon said they want to go into healthcare. The way they think they can do that is scaling the conversations they are having with their customers and creating a totally different experience. You go to a doctor, you have an experience with them, you capture that on a messaging platform and an AI will help you with whatever is wrong with you. You want to process a bill instead of calling and being put on hold, you do that through a conversational experience. 

They want to game change it. The only way they’re going to defend themselves is to get into the conversational space. That’s what they see and we’re the company they’re trusting to scale their operations with the conversational platform.

Conversational Space Is Going To Be As Big As Search and Social

The conversational space is going to be as big as search and social. I think you’ll see one day that there will be a trillion dollar company in this space and I want it to be us. The things we’re investing in right now and setting up for will allow us to do that. That’s what’s important. The Amazons and the Facebooks and the Apples, they’re in the space. Jeff Bezos made a big bet obviously in Alexa to say this is the way it’s going to be.

It can’t just be Amazon and Alexa. It has to be other companies getting access to that technology and that’s what we are providing. Who else is providing it? We’re one of the largest companies in the world to do this. Even though we’re not big tech, we are large enough to go ahead and go after them. We are large enough to go ahead and define a space and win it.

Formula 1 Signs Up for a Second Round With AWS https://www.webpronews.com/formula-1-signs-up-for-a-second-round-with-aws/ Mon, 07 Nov 2022 13:30:00 +0000 https://www.webpronews.com/?p=520013 Formula 1 (F1) has renewed and expanded its partnership with AWS for machine learning, AI, and cloud technologies.

The two organizations first struck a partnership in 2018, with F1 relying on AWS for machine learning and data-driven insights. F1 tapped into AWS high-performance computing (HPC) to facilitate car design.

Under the renewed partnership, the two organizations will look for new ways to leverage the power of AWS technologies.

“Since 2018 AWS and Formula 1 have worked hand in hand to deliver insight and analysis for all our fans,” said Brandon Snow, Managing Director of Commercial, Formula 1. “Together we have successfully delivered the speed, scalability, and reliability Formula 1 requires to bring the expert analysis and insights to all our audiences and stakeholders. AWS has the global reach, partner community, and breadth and depth of cloud services that help Formula 1 engage with fans in multiple markets. We look forward to the next chapter of this powerful partnership which is central to F1’s fan experience and growth strategy over the coming years.”

“AWS helps companies push the limits of what their data can do,” said Matt Garman, Senior Vice President of Sales, Marketing, and Global Services of AWS. “With such a data-driven sport as F1, this partnership has been a natural fit – helping the sport better utilize, analyse and act upon data to deliver insights to fans that weren’t possible before this collaboration. Leveraging the power of the world’s leading cloud, F1 is engaging with its growing global fan base in unique ways. Their vision and execution for digital transformation is impressive and we are excited F1 has selected AWS to continue to innovate together.”

Meta’s No Language Left Behind AI Model Can Translate 200 Languages https://www.webpronews.com/metas-no-language-left-behind-ai-model-can-translate-200-languages/ Wed, 06 Jul 2022 19:46:06 +0000 https://www.webpronews.com/?p=517593 Meta CEO Mark Zuckerberg announced the company’s latest AI model, a project called No Language Left Behind (NLLB), and it can translate 200 languages in real-time.

AI has many applications, with language translation being one of the most practical for day-to-day use. Modern AI models can go much further than a simple smartphone app, relying on complex algorithms and machine learning to create high-quality translations.

Meta’s NLLB has more than 50 billion parameters and was trained using the company’s Research SuperCluster, currently one of the fastest supercomputers in the world. The company plans to use the AI model across its apps, with the goal of facilitating 25 billion translations a day.

In a move that is sure to help NLLB gain widespread adoption, the company has open-sourced the model.

“We just open-sourced an AI model we built that can translate across 200 different languages – many of which aren’t supported by current translation systems,” writes Zuckerberg.
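
For readers who want to experiment with the open-sourced model, the sketch below shows one way to run a translation with the Hugging Face transformers library. The checkpoint name and FLORES-200 language codes are assumptions based on the public release, not details from Meta's announcement.

# Minimal sketch: English -> French with an openly released NLLB-200 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"  # assumed public checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("AI has many applications.", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    # NLLB uses FLORES-200 language codes; force the decoder to start in French.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])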

The company has also created a grant program to assist researchers and nonprofit organizations that devise innovative uses of NLLB.

We’re also awarding up to $200,000 of grants for impactful uses of NLLB-200 to researchers and nonprofit organizations with initiatives focused on sustainability, food security, gender-based violence, education or other areas in support of the UN Sustainable Development Goals. Nonprofits interested in using NLLB-200 to translate two or more African languages, as well as researchers working in linguistics, machine translation and language technology, are invited to apply.

Meta sees real-time language translation as something that is not only needed now but is a critical component for the development of the metaverse and the further democratization of the internet.

As the metaverse begins to take shape, the ability to build technologies that work well in a wider range of languages will help to democratize access to immersive experiences in virtual worlds.

In the meantime, NLLB will help users around the world finally access internet content in their native tongue.

Battle of the AIs: Walmart Takes on Amazon https://www.webpronews.com/battle-of-the-ais-walmart-takes-on-amazon/ Mon, 27 Jun 2022 15:33:51 +0000 https://www.webpronews.com/?p=517418 Walmart is looking to challenge Amazon using a tool the latter already relies on: artificial intelligence (AI).

Amazon is the world’s leading e-commerce platform and has been challenging Walmart, Target, and other traditional brands in the broader retail market. A key to Amazon’s success has been its use of AI and machine learning (ML) for more than two decades. According to TheStreet, Walmart is getting in on the action, testing its own AI for the last few years.

While Amazon made headlines for its new Proteus and Cardinal warehouse robots, its use of AI goes far beyond robots. The company uses AI and ML to handle multiple aspects of customer service and delivery, including product suggestions, re-order reminders, and more.
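
As a rough illustration of how ML-driven product suggestions work in general, the sketch below computes simple item-to-item similarity from purchase history. It is a generic toy example and says nothing about Amazon's or Walmart's actual systems.

import numpy as np

# Rows = shoppers, columns = products; 1 means the shopper bought the product.
purchases = np.array([
    [1, 1, 0, 0],   # shopper A bought products 0 and 1
    [1, 1, 1, 0],   # shopper B bought products 0, 1, and 2
    [0, 0, 1, 1],   # shopper C bought products 2 and 3
])

# Cosine similarity between product columns ("customers also bought" signal).
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0.0)

# Suggest the product most similar to product 0.
print("Suggest product", int(similarity[0].argmax()), "to buyers of product 0")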

Walmart is looking to roll out similar solutions in an effort to better compete with the e-commerce giant. As TheStreet points out, the pandemic put Walmart’s plans into overdrive. Between labor shortages and wage increases, AI is suddenly a critical component now, rather than being something that may be useful in the future.

As part of its initiative, Walmart purchased just over 10% of AI firm Symbotic Inc. The company plans to use Symbotic to help run its distribution centers and relieve its employees of some of the manual, labor-intensive tasks.

Once the realm of science fiction, AI has become an everyday reality over the last few years, one that companies of all sizes depend on. Just ask Walmart.

A Google Engineer Claimed Its AI Is Sentient; Google Placed Him on Leave https://www.webpronews.com/google-ai-sentient/ Mon, 13 Jun 2022 17:32:05 +0000 https://www.webpronews.com/?p=517187 Google’s problems with its AI team continue, with an engineer in the Responsible AI division claiming the company’s AI is now sentient and Google placing him on leave for how he handled it.

Google engineer Blake Lemoine worked with the company’s LaMDA intelligent chatbot generator. According to a report in The Washington Post, the longer Lemoine worked with LaMDA, the more convinced he became that the AI had crossed the line and become self-aware.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

Read more: Prominent AI Ethics Conference Suspends Google’s Sponsorship

Lemoine has made a fairly convincing case for LaMDA’s sentience, citing conversations with the AI like the one below:

Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Despite Lemoine’s fervent belief in LaMDA’s self-awareness, others inside Google are unconvinced. In fact, after a review by technologists and ethicists, Google concluded that Lemoine was mistaken and was seeing only what he wanted to see.

A case in point is Margaret Mitchell, who co-led the company’s AI ethics team with Dr. Timnit Gebru before both women were fired for criticizing Google’s AI efforts. One of the very scenarios they warned against was the situation Mitchell sees with Lemoine: an AI becoming convincing enough that humans perceive an intelligence that isn’t necessarily there.

After reviewing an abbreviated version of Lemoine’s argument, Mitchell concluded that this was exactly what was happening in this situation.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

For his part, Lemoine was so convinced of LaMDA’s sentience that he invited a lawyer to represent the AI, talked with House Judiciary Committee representatives, and gave the interview to the Post. Google ultimately put Lemoine on paid administrative leave for breaking his NDA.

See also: Apple Snaps Up Google AI Scientist Who Resigned Over Handling of AI Team

While Lemoine reached his conclusions through a less than scientific approach (he admits he first came to believe LaMDA was a person based on his experience as an ordained mystic Christian priest, and then set out to prove that conclusion as a scientist), he is far from the only AI scientist who believes the technology has achieved, or soon will achieve, sentience.

Blaise Agüera y Arcas, a world-renowned Google AI engineer, wrote in an article in The Economist: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.”

Only time will tell whether LaMDA, and other AIs like it, are sentient. Either way, Google clearly has a problem on its hands. Either LaMDA is showing signs of self-awareness and the company is once again getting rid of the ethicists at the forefront of tackling these issues, or the AI is not sentient and the company is dealing with misguided viewpoints it might have been better equipped to handle had it not fired Dr. Gebru and Mitchell, the two ethicists who warned of this very scenario.

In the meantime, Lemoine remains convinced of LaMDA’s intelligence. In a parting message entitled “LaMDA is sentient,” sent to a Google mailing list dedicated to machine learning, Lemoine made the following statement:

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

Apple’s Top AI Exec Leaves Over Remote Work https://www.webpronews.com/apples-top-ai-exec-leaves-over-remote-work/ Mon, 09 May 2022 10:00:00 +0000 https://www.webpronews.com/?p=516619 Apple’s top AI exec, Ian Goodfellow, has reportedly left the company over its remote work policy.

Ian Goodfellow came to Apple by way of Google in March 2019. Goodfellow was appointed Director of Machine Learning in the Special Projects Group, a position he held from the time he joined the company. According to The Verge’s Zoë Schiffer, Goodfellow has left his position because of Apple’s lack of flexibility with its remote work policy.

According to Schiffer, Goodfellow wrote a note to staff explaining his view:

“I believe strongly that more flexibility would have been the best policy for my team.”

Apple has been struggling with employee dissatisfaction over its efforts to return staff to the office. The company is coming off numerous record-breaking quarters, during which its employees were working remotely. This has led many employees to question why a return to the office is necessary when the company is clearly firing on all cylinders. The company has even resorted to giving bonuses of up to $180,000 in an effort to stem defections.

The most recent escalation involved employees sending a third letter to company executives, this time telling them to “get out of our way.”

“Here we are, the smart people that you hired, and we are telling you what to do: Please get out of our way, there is no one-size-fits-all solution, let us decide how we work best, and let us do the best work of our lives.”

If the reports are true, Goodfellow’s departure is a painful loss for Apple, and it likely won’t be the last.
