K1 Acquires MariaDB, Taking Company Private
Wed, 11 Sep 2024

K1 Investment Management announced it has completed its acquisition of MariaDB, the popular database that runs on servers, computers, and mobile devices around the world.

MariaDB is a fork of MySQL, created after Oracle acquired MySQL’s parent company, Sun Microsystems. MariaDB, the company, provides commercial services and support to organizations using the database. The US Department of Defense, Red Hat, Samsung, ServiceNow, and Deutsche Bank are just a few of the major organizations that rely on MariaDB.

“To run our strategic risk platform at Deutsche Bank, we needed a database that was reliable and performant while handling a massive amount of data. That’s why we turned to MariaDB,” said Liang Ma, Director Core Strats at Deutsche Bank. “With MariaDB Enterprise Server, we have a database that delivers the stability we need at a fraction of the cost of proprietary alternatives.”

MariaDB went public in late 2022 but has struggled to deliver since. As part of the deal, K1 has appointed a new CEO, Rohit de Souza, to oversee the company’s turnaround and expansion.

“We are thrilled to welcome MariaDB to the K1 portfolio and to have Rohit leading the company into its next phase of growth,” said Sujit Banerjee, Managing Director of K1 Operations, LLC. “Together, we aim to accelerate product innovation and continue MariaDB’s mission of delivering high-quality, enterprise-grade solutions to meet the growing demands of the market.”

“With K1’s support, we are poised to expand our capabilities and continue delivering the innovative database solutions our customers rely on,” said Rohit de Souza, CEO at MariaDB. “This partnership allows us to further product innovation, advancing our ability to support new workloads driven by AI and the cloud. We remain focused on making it easier for customers to transition from costly alternatives and meet the rapidly growing demands for AI and cloud-based solutions.”

MariaDB is K1’s third take-private deal.

Microsoft and Oracle Announce Oracle Database@Azure Enhancements
Tue, 10 Sep 2024

Microsoft and Oracle are expanding their partnership, enhancing the Oracle Database@Azure solution the two companies announced a year ago.

Oracle Database@Azure is a service that allows businesses to run their Oracle Cloud Infrastructure (OCI) services in Microsoft Azure datacenters. The option gives organizations the ability to combine the two companies’ services in a low-latency, high-performance environment.

The latest enhancements include the following features, per a Microsoft Azure blog post:

  • Microsoft Fabric plus Oracle Database@Azure integration to fuel customer data and AI innovations.
  • Integration with Microsoft Sentinel and compliance certifications to provide industry-leading security and compliance for mission-critical workloads.
  • Plans to expand the offering to a total of 21 primary regions, each with at least two availability zones and support for Oracle’s Maximum Availability Architecture (MAA) to deliver the highest levels of availability and resilience.
  • Examples from customers joining us on stage at Oracle CloudWorld this week.

Microsoft says customers are choosing Oracle Database@Azure because of the “performance, scalability, security, and reliability” it offers, with the solution becoming a popular choice for mission-critical workloads for some of the world’s largest companies.

“With the continuing threat of ransomware, companies must adapt to rebuild their critical services and systems from scratch—not just reconstitute data into an environment that is compromised. Oracle Database@Azure is the only service that meets MSCI’s Cyber DR needs from a recovery time, security isolation, recovery point and cost perspective.”—John Rogers, Chief Information Security Officer, MSCI.

Microsoft says its new Microsoft Fabric works with Oracle Database@Azure to help drive data and AI innovation.

Microsoft Fabric is an ever-evolving, AI-powered data analytics platform that empowers customers to unify and future-proof their data estate. Fabric keeps up with the trends and seamlessly integrates each new capability so businesses can spend less time integrating and managing their data estate and more time unlocking value from their data. OCI GoldenGate offers seamless support to integrate data across dozens of data sources and targets including OneLake in Microsoft Fabric, delivering enterprise-grade, real-time data to the Microsoft ecosystem. The combination of OCI GoldenGate’s continuous, low-latency data availability with Microsoft Fabric’s comprehensive data and analytics tools, like Power BI and Copilot, enables customers to connect their essential data sources—both Oracle and non-Oracle—to drive better insights and decision-making.

Microsoft and Oracle have been making significant inroads in the cloud industry. Oracle, in particular, has become a popular option for AI workloads, thanks to its more modern cloud architecture. The enhanced integration with Microsoft Azure is sure to benefit both companies moving forward.

IBM Buying Accelalpha, a Top Oracle Services Provider
Tue, 10 Sep 2024

IBM announced its latest acquisition, saying it has struck a deal to acquire Accelalpha, a top Oracle services provider.

IBM has been steadily transforming itself into a leading hybrid and multi-cloud provider, snapping up a number of startups and companies as it works toward that goal. Accelalpha is the latest acquisition, expanding IBM’s Oracle support.

Accelalpha has a long history of supporting Oracle, and has chalked up some notable firsts.

Accelalpha’s consultants bring expertise across the Oracle Cloud Applications Suite including Oracle Supply Chain Management (SCM) and Logistics, Oracle Cloud Enterprise Resource Planning (ERP), Oracle Cloud Enterprise Performance Management (EPM), Oracle Cloud Customer Experience (CX), and Oracle Configure, Price, Quote (CPQ). As an Oracle Cloud Excellence Certified Implementer, Accelalpha boasts the largest Oracle logistics practice globally and was the first Oracle partner to implement Oracle Fusion Financials. Since its founding in 2009, Accelalpha has grown organically and through acquisition. Notable past acquisitions include Prolog Partners, Key Performance Ideas, LogistiChange and Frontera Consulting.

“Many enterprises depend on Oracle to run the workflows that are at the heart of their enterprise,” said Kelly Chambliss, Senior Vice President, IBM Consulting, Americas. “With our acquisition of Accelalpha, IBM will be even better positioned to help our clients deploy and manage Oracle solutions, including generative AI and cloud technology, for competitive advantage.”

“IBM’s client and employee-centric culture and long-established scale and reach in more than 175 countries is a great fit for the next stage of our growth,” said Nat Ganesh, CEO, Accelalpha. “We’re thrilled to bring our expertise in Oracle Cloud solutions and targeted domain and industry knowledge to bear together with IBM’s strength in generative AI and hybrid cloud. With Accelalpha’s history of being a pioneer in Oracle Cloud and IBM’s deep-rooted dedication to innovation that matters, we can further accelerate value creation for our clients.”

IBM did not disclose the terms of the deal.

Kaspersky Offloads US Customers to Ultra AV
Sat, 07 Sep 2024

Kaspersky is offloading its US antivirus customers following a ban on its software, reaching a deal with Pango Group to migrate them to Pango’s Ultra AV.

The US government banned Kaspersky software in June, citing “undue and unacceptable risks” stemming from the company’s ties to the Kremlin.

“The case against Kaspersky Lab is overwhelming,” Senator Jeanne Shaheen said at the time. “The strong ties between Kaspersky Lab and the Kremlin are alarming and well-documented.”

Kaspersky said it would wind down its US business in response, saying the ban ensured “business opportunities in the country are no longer viable.”

According to Axios, Kaspersky has reached a deal with Pango that will at least see customers continue to receive support and software updates to their antivirus software.

“The good news is that there’s really no action required by customers,” Pango CEO Neill Feather told the outlet.

“Those things that they do need to be aware of and need to know, we’ll lay out for them in a series of email communications and then we also have our customer support team ramped up and ready to assist,” he added.

Any forced software migration is difficult, especially when it comes unexpectedly, but it appears that Kaspersky and Pango are doing what they can to ease the transition as much as possible.

Salesforce Acquiring Own Company
Fri, 06 Sep 2024

Salesforce announced it is acquiring Own Company, a leader in data protection and management.

Salesforce will pay approximately $1.9 billion in cash for Own, saying the company’s expertise and portfolio will help Salesforce enhance its data protection offerings.

“Data security has never been more critical, and Own’s proven expertise and products will enhance our ability to offer robust data protection and management solutions to our customers,” said Steve Fisher, President and GM, Einstein 1 Platform and Unified Data Services. “This proposed transaction underscores our commitment to providing secure, end-to-end solutions that protect our customers’ most valuable data and navigate the shifting landscape of data security and compliance.”

“We’re excited to join forces with Salesforce, a company that shares our commitment to data resilience and security,” said Sam Gutmann, Own CEO. “As digital transformation accelerates, our mission has expanded from preventing data loss in the cloud to helping customers protect their data, unlock business insights, and accelerate AI-driven innovation. Together with Salesforce, we’ll deliver even greater value for our customers by driving innovation, securing data, and ensuring compliance in the world’s most complex and highly regulated industries.”

Own has established itself as a leader in its field, with almost 7,000 customers. Salesforce says Own’s products and services will complement its existing ones.

The acquisition comes at a time when customers are increasingly focused on mitigating data loss due to system failures, human error, and cyberattacks. The advent of AI has made customers even more aware of the need to protect and manage access to data. By investing more deeply in pure cloud-native data protection solutions, Salesforce aims to accelerate the growth of its Platform Data Security, Privacy, and Compliance products.

Own’s capabilities will complement Salesforce’s existing offerings, such as Salesforce Backup, Shield, and Data Mask. This will enable Salesforce to offer a more comprehensive data protection and loss prevention set of products, further reinforcing its commitment to providing secure, end-to-end solutions. These solutions are essential for protecting customers’ most valuable assets—their data—and for deriving the most value from their historical data by leveraging AI to understand trends and forecast future growth.

Salesforce has made a number of strategic acquisitions in the last several years, the largest being Slack. The company had slowed its pace of acquisitions last year, but this latest deal could indicate the company is returning to its former pace.

Verizon CEO: Frontier Deal to Boost Broadband Reach, ‘A Game Changer for Consumers’
Fri, 06 Sep 2024

In a significant move to expand its broadband capabilities, Verizon Communications has announced a $20 billion acquisition of Frontier Communications, a deal that CEO Hans Vestberg says will enhance Verizon’s premium broadband services and create more choices for consumers. Speaking exclusively on Squawk Box, Vestberg emphasized that the deal will “accelerate” Verizon’s broadband efforts, helping the company bring its fiber network and wireless services to millions more households across key markets.

“This is a great opportunity for consumers,” Vestberg said. “By acquiring Frontier, we’re extending the reach of our premium fiber network while connecting it with our wireless offerings. It’s about providing better services, faster speeds, and more options for customers.”

Expanding Fiber Footprint and Enhancing Consumer Experience

The acquisition gives Verizon access to Frontier’s vast fiber network, which spans 24 states and Washington, D.C., covering over seven million homes, with plans to expand to 10 million. According to Vestberg, the decision to acquire Frontier rather than continuing to build out Verizon’s infrastructure from scratch was both economical and strategic.

“It was a fairly simple decision between building and buying,” Vestberg explained. “The economics were appealing, especially considering the significant transformation Frontier has undergone in recent years. Today, more than 50% of their revenue comes from fiber services, which made this a compelling asset for us.”

Frontier, which acquired a large portion of its network from Verizon a decade ago, has re-emerged as a key player in the fiber broadband space, with a strong focus on high-speed internet services. Verizon’s acquisition will enable the telecom giant to increase its fiber reach by 50%, adding 15 million new homes to its network, a leap from its current 30 million homes.

A Strategic Fit in a Competitive Market

Vestberg was quick to address concerns about potential regulatory hurdles and antitrust scrutiny, given the scale of the deal. “We believe this is great for consumers, and we’re confident that regulatory approval will go through,” he said, dismissing early skepticism from investors. “This deal provides more competition in the broadband market, especially in areas where we don’t have significant overlap with Frontier’s footprint.”

The Verizon CEO highlighted that the acquisition would allow the company to offer not just fiber broadband, but also Verizon’s broader suite of services, including fixed wireless access and its My Home plans. The latter includes perks such as streaming services and home insurance, adding further value to the consumer proposition.

“We’re not just bringing faster internet; we’re bringing better solutions for how people live and work today,” Vestberg said. “By offering fiber, fixed wireless, and premium packages all in one, we’re giving customers more control and more choice.”

Competition and Broader Implications

Verizon’s move to acquire Frontier also positions the company to compete more aggressively with AT&T, a rival with a larger fiber footprint. Although the deal will significantly boost Verizon’s fiber network, analysts note that it still leaves the telecom company covering less than 20% of U.S. households with fiber-based home broadband—a gap that Vestberg acknowledged.

“We are well aware that there’s still room for growth, but with this acquisition, we’re in a unique position to lead,” he said. “With our top-tier wireless services and the expansion of fiber, we’re aiming to meet customers where they are, whether it’s urban, suburban, or even rural areas.”

Vestberg also pointed out that the deal marks a significant shift in how Verizon views the broadband landscape, noting that it’s not just traditional fiber competitors they’re up against. “We’re also competing with emerging technologies like satellite internet providers, such as Starlink, and other cable players. This acquisition gives us a broader platform to stay competitive.”

Financial Outlook and Future Plans

On the financial front, Vestberg reassured shareholders that the acquisition will pay off in the long run. “From day one, this deal will contribute to revenue growth, and within 12 months, it will positively impact EBITDA, cash flow, and earnings per share,” he said. “Our balance sheet is strong, and while our leverage will increase slightly, it’s manageable considering the size of Verizon’s equity.”

As for Verizon’s long-term strategy, the CEO emphasized that the company remains focused on its core mission: building a network that can support multiple profitable connections. “Mobility and broadband are essential for every individual and organization. This acquisition is part of our broader strategy to ensure that we’re leading in both,” Vestberg said.

In the meantime, Verizon will continue executing its plans for mobility and fixed wireless services while Frontier proceeds with its goal of passing 10 million homes with fiber.

Deal Accelerates Everything

If approved, Verizon’s $20 billion acquisition of Frontier would mark a key moment in the company’s strategy. With its increased fiber footprint, Verizon would strengthen its already strong position in the highly competitive broadband market, offering faster speeds, better services, and more options for millions of new customers.

“This deal accelerates everything we’ve been working toward,” Vestberg concluded. “It’s not just about expanding our network; it’s about providing a superior experience for consumers who increasingly depend on fast, reliable internet for their daily lives.”

Massive Disney Data Breach Exposes Financial Secrets and Personal Info
Fri, 06 Sep 2024

More details have emerged about the cyberattack that hit Disney earlier this summer, one of the largest corporate data breaches in recent years. A hacker group known as Nullbulge leaked over 1.1 terabytes of sensitive data, including internal financial details, personal information about employees and customers, and login credentials to cloud systems. The breach highlights significant cybersecurity vulnerabilities even at large, well-resourced corporations like Disney.

Documents reviewed by The Wall Street Journal show that the leaked data includes internal financial figures, detailed Slack communications, and confidential information about Disney’s theme park strategies and streaming services. The leak sheds light on Disney’s operations and raises important questions about the adequacy of corporate cybersecurity measures and insider threat management.

Critical Financial Information Exposed

Among the most significant revelations in the breach are the financial details regarding two of Disney’s major revenue streams: Disney+ and the Genie+ premium park pass. Internal documents showed that Disney+ generated over $2.4 billion in revenue during the first quarter of 2024, making up approximately 43% of the company’s direct-to-consumer revenue. This granular level of detail is rarely disclosed in Disney’s public financial filings and offers new insight into the performance of its streaming services.

Another key revenue driver, Disney’s Genie+ park pass system, also saw its internal financials exposed. According to the documents, Genie+ generated over $724 million in pretax revenue between its launch in October 2021 and June 2024 at Walt Disney World alone. “These numbers underscore just how crucial Genie+ and Disney+ have become for the company’s financial health,” said an industry analyst. “Having this data out in the open makes Disney vulnerable to competitive insights.”

Personal Data Compromised

The breach also compromised personal data, including Disney Cruise Line employees’ passport numbers, visa details, and home addresses. This has heightened concerns about identity theft and the broader implications of such sensitive information falling into the wrong hands. A separate spreadsheet contained Disney Cruise passengers’ names, addresses, and contact information, further escalating privacy concerns.

“Data breaches like this are becoming more common, but the scale and sensitivity of this one make it particularly troubling,” said cybersecurity expert Steve Layne, CEO of Insider Risk Management. “When personal details like passport numbers and home addresses are exposed, it creates an immense risk not just for the company but for every individual involved.”

Disney’s response to the breach has been carefully measured. A spokesperson stated, “We decline to comment on unverified information The Wall Street Journal has purportedly obtained as a result of a bad actor’s illegal activity.” Still, Disney assured investors in its August regulatory filing that the breach had not materially impacted its financial performance. However, experts warn that the long-term fallout could be significant.

The Role of Insider Threats in Cybersecurity Failures

One of the most alarming aspects of the breach is how it occurred. Nullbulge claimed to have gained access by compromising a single employee’s device, specifically by stealing Slack session cookies, which allowed the group to infiltrate Disney’s internal communications systems. This method highlights the growing importance of mitigating insider threats, which can result from accidental, negligent, or malicious actions.

“Insider threats only come in three forms: accidental, negligent, or malicious human behavior,” Layne explained. “In this case, adversaries targeted a software development manager, which gave them access to a treasure trove of highly confidential data. Companies need to invest more in insider risk programs that could prevent incidents like this from happening.”

Experts in cybersecurity agree that insider threats remain one of the most difficult attack vectors to defend against. Khwaja Shaik, IBM’s CTO and a board advisor on digital transformation, warned that “the question isn’t whether your organization will face a breach, but how prepared you are to respond and protect your most valuable asset: trust.”

Shaik elaborated on the growing sophistication of cyberattacks, noting, “Traditional hacking methods are giving way to more advanced techniques, such as inference attacks, which exploit known data to infer sensitive information without directly infiltrating systems. This makes defending against such breaches incredibly difficult.”

A Call for Stronger Cybersecurity Measures

In the wake of the breach, cybersecurity experts are calling for stronger measures to prevent similar incidents in the future. Dr. Erdal Ozkaya, a renowned cybersecurity strategist, emphasized the importance of endpoint security and network observability in mitigating the risks posed by hackers. “The attack on Disney highlights how crucial it is for companies to invest in robust cybersecurity measures, particularly when it comes to securing endpoints and monitoring network traffic for unusual activity,” Ozkaya said.

He added, “Phishing remains one of the most common entry points for attackers, and training employees to recognize these attacks is critical. But beyond training, companies need to implement systems that can detect and prevent unauthorized access in real time.”

Insider risk programs have become increasingly popular among companies looking to protect themselves from these types of attacks. “Organizations often underestimate the importance of having a robust risk management framework that quantifies the probability and impact of insider threats,” said Tim Burr, a leading IT executive. “Without that, it’s difficult to show a return on investment for cybersecurity programs aimed at preventing insider breaches.”

The Growing Role of Hacktivism

Nullbulge, the group responsible for the Disney hack, claims to be a Russia-based hacktivist group advocating for artist rights. However, security researchers believe the attack may have been carried out by a lone individual based in the United States. In a direct message via X (formerly Twitter) in July, Nullbulge claimed they accessed Disney’s data through a compromised device belonging to a software development manager.

“Whether this was the work of a group or an individual, the impact is the same,” Ozkaya said. “Hacktivism has blurred the lines between activism and criminality, with personal data and corporate secrets often becoming collateral damage in their efforts to make a statement.”

As companies increasingly rely on digital communication platforms like Slack, the attack underscores the vulnerabilities that exist within modern workplace systems. Nullbulge’s method of accessing Disney’s systems by exploiting Slack cookies is a stark reminder of how seemingly small weaknesses can lead to massive breaches. “This breach should serve as a wake-up call for any organization using cloud-based communication tools,” Ozkaya emphasized. “Proper encryption, multi-factor authentication, and endpoint security are non-negotiable.”

Long-Term Reputational Costs

While Disney has stated that the breach had no material impact on its financial performance, the long-term consequences may be more significant. The exposure of financial details, personal information, and internal communications not only opens Disney up to reputational damage but also legal challenges, especially if personal data is used maliciously.

“Data breaches aren’t just about short-term financial impact—they have long-term reputational costs, especially for a brand like Disney that relies heavily on consumer trust,” said Ravi Hirolikar, a seasoned CISO and cybersecurity advisor. “The cost of restoring that trust, particularly when personal data is involved, is enormous.”

In the increasingly complex arena of cybersecurity, experts agree that breaches like this are inevitable, but what matters most is how companies respond. Khwaja Shaik noted, “Boards need to view cybersecurity not just as a compliance issue but as a core part of their business strategy. The future of business hinges on the ability to safeguard data and build a culture of trust.”

A Lesson for Enterprise-Level Organizations

The Disney data breach is another wake-up call to enterprise-level organizations that no company is immune from cyberattacks, no matter how large or well-funded. As companies like Disney continue to digitize operations and rely on cloud infrastructure, the importance of robust cybersecurity measures cannot be overstated. The breach also highlights the growing role of insider threats and the need for companies to address these risks proactively.

For Disney, the road ahead will likely involve increased investment in cybersecurity and a renewed focus on protecting both corporate secrets and the personal data of its customers and employees. For the broader corporate world, the breach serves as another reminder to treat data security as not just a technical issue but a critical component of overall business strategy.

As Steve Layne put it, “Imagine the economic cost of this incident and the return on investment of a strong insider risk program that could have prevented it.”

Palo Alto Networks Completes Purchase of IBM’s QRadar SaaS Assets
Thu, 05 Sep 2024

Palo Alto Networks has closed a deal for the Software-as-a-Service assets of IBM’s QRadar, bolstering the company’s threat detection capabilities.

QRadar Suite “is a modernized threat detection and response solution designed to unify the security analyst experience and accelerate their speed across the full incident lifecycle.” The suite leverages enterprise-grade AI and automation to improve response, analysis, and overall security.

Palo Alto announced it has completed its acquisition of the QRadar SaaS assets, with IBM becoming a preferred managed security services provider. IBM also committed to further deploying Palo Alto’s “security platforms with the deployment of Cortex XSIAM for its own next-gen security operations, and Prisma SASE 3.0 for zero-trust network security to safeguard more than 250,000 of its global workforce.”

The company says customers will see the following benefits:

  • Seamless Migration: Palo Alto Networks, alongside IBM Consulting and its team of security experts, will offer free migration services to eligible customers, ensuring a smooth transition to the Cortex XSIAM® platform while retaining existing best practices.
  • Enhanced Security Operations: Cortex XSIAM integrates multiple SOC tools into a Precision AI-powered platform to provide comprehensive functionality, reduce manual workload, and enable more effective threat response.
  • Advanced Analytics and Automation: Cortex XSIAM uses Precision AI-powered analytics to consolidate security alerts into fewer high-priority incidents.
  • IBM Consulting Platform Support: The companies will offer immersive experiences for customers interested in adopting Palo Alto Networks security platformization, and IBM is training over 1,000 consultants on Palo Alto Networks security solutions.
  • On-Premises Customer Continuity: QRadar clients who remain on QRadar on-prem will continue receiving IBM features and support. QRadar SaaS customers will also receive uninterrupted customer service and support until they are ready to move to Cortex XSIAM.

“We are on a mission to help organizations transform their security operations and harness the potential of Precision AI-powered platforms to better protect their businesses,” said Nikesh Arora, Chairman and CEO, Palo Alto Networks. “Our partnership with IBM reinforces our commitment to innovation and our conviction in the tremendous benefit of QRadar customers adopting Cortex XSIAM for a robust, data-driven security platform that offers transformative efficiency and effectiveness in defending against evolving cyber threats.”

“Together, IBM and Palo Alto Networks are shaping the future of cybersecurity for our customers and the industry at large,” added Arvind Krishna, Chairman and CEO, IBM. “Working with Palo Alto Networks will be a strategic advantage for IBM as our two companies partner on advanced threat protection, response, and security operations using Cortex XSIAM and watsonx, backed by IBM Consulting. At the same time, IBM will continue innovating to help secure organizations’ hybrid cloud environments and AI initiatives, focusing our investments on data security and identity and access management technologies.”

AT&T Sues Broadcom Over VMware ‘Breach of Contract’
Thu, 05 Sep 2024

AT&T is fighting back against Broadcom’s efforts to change the terms for existing VMware users, suing the company over alleged breach of contract.

Since Broadcom purchased VMware, the company has been aggressively increasing prices and alienating customers by canceling existing licenses in favor of subscriptions that cost orders of magnitude more. AT&T appears to have had enough and is suing Broadcom.

The telecom company outlined its complaint, and asked the court for injunctive relief.

Plaintiff AT&T Services, Inc. brings this action for breach of contract, breach of the implied covenant of good faith and fair dealing, declaratory judgment, and injunctive relief against Defendants Broadcom Inc., as successor-in-interest to VMware, Inc., and VMware, Inc. AT&T’s allegations are based on knowledge as to its own acts, and on information and belief as to all other matters except where expressly indicated.

Broadcom Allegedly Reneging On Contracts

AT&T goes on to describe Broadcom’s habit of canceling perpetual licenses in favor of expensive subscriptions.

Almost immediately, Broadcom began reshaping VMware’s business model from selling stand-alone perpetual software licenses to selling more expensive subscription software licenses bundled with additional products and services. As its new owner, Broadcom has every right to change VMware’s business model prospectively. What it cannot do, however, is retroactively change existing VMware contracts to match its new corporate strategy. But that is exactly what Broadcom seeks to do here.

AT&T then accuses Broadcom of trying to hold it hostage by withholding support.

Specifically, Broadcom is threatening to withhold essential support services for previously purchased VMware perpetually licensed software unless AT&T capitulates to Broadcom’s demands that AT&T purchase hundreds of millions of dollars’ worth of bundled subscription software and services, which AT&T does not want.

Not only is Broadcom contractually obligated to continue providing the software support services, but without them AT&T has no way to ensure the VMware software installed on approximately 8,600 AT&T servers that deliver services to millions of AT&T customers worldwide will continue to operate.

AT&T goes on to say that many of the services being run by VMware instances are critical to the US government, as well as public safety organizations, such as emergency personnel, firefighters, paramedics, and police officers.

AT&T acknowledges that its original support period expires on September 8, 2024. However, the two parties signed an amendment to their agreement in 2022, one which gives AT&T the option to renew and extend support for up to two additional years.

Although AT&T has already advised Broadcom that it is exercising its option to renew support services for at least another year, Broadcom is refusing to honor AT&T’s renewal.

Instead, Broadcom states it will only continue to provide support services if AT&T agrees to purchase scores of subscription services and software that: (1) AT&T does not want or need; (2) would impose significant additional contractual and technological obligations on AT&T; (3) would require AT&T to invest potentially millions to develop its network to accommodate the new software; (4) may violate certain rights of first refusal that AT&T has granted to third-parties; and (5) would cost AT&T tens of millions more than the price of the support services alone.

AT&T’s Story Is an Increasingly Common One

The telecom company goes on to say that many critics and customers predicted that Broadcom would engage in this behavior when its purchase of VMware was first announced. The company is correct, with many at the time expressing concern over what the future might hold.

Unfortunately, those fears have been realized, with organizations around the world having their own horror stories related to their interactions with Broadcom. In fact, EU cloud organization CISPE has appealed to regulators, saying “public sector bodies, large European businesses, SMEs and start-ups are all threatened by egregious and unwarranted new contract terms and price increases.” The organization went on to warn that Broadcom’s actions would decimate the EU cloud.

“As well as inflicting financial damage on the European digital economy, these actions will decimate Europe’s independent cloud infrastructure sector and further reduce the diversity of choice for customers,” said Francisco Mingorance, secretary general of CISPE. “Dominant software providers, in any sector from productivity software to virtualisation, must not be allowed to wield life or death power over Europe’s digital ecosystems.”

Some customers have had enough and are increasingly looking at competitors’ offerings. In fact, evidence suggests Broadcom’s tactics recently cost it a 24,000-license customer.

With large customers like AT&T willing to go through the expense of a lawsuit over Broadcom’s actions, it’s a safe bet the VMware owner’s troubles are just beginning.

Did Delta’s Aging IT Systems Turn a Tech Outage Into a $500 Million Disaster?
Wed, 04 Sep 2024

In the wake of Delta Air Lines’ prolonged recovery following the CrowdStrike-induced tech outage in July 2024, a complex blame game has ensued. Central to the discussion is whether Delta’s outdated IT systems hampered its ability to recover swiftly while competitors bounced back in days. The dispute has become a legal battle between Delta, CrowdStrike, and Microsoft, with each party pointing to the others as the primary source of Delta’s operational meltdown.

The Wall Street Journal wrote an article titled “The Day Delta’s ‘On-Time Machine’ Broke, and the Blame Game It Sparked,” which inspired WebProNews to investigate the aging IT issue further. 

The Crash that Grounded Delta

On July 19, 2024, a faulty update to CrowdStrike’s Falcon security software crashed millions of Microsoft Windows-based computers across the globe. Airlines, hospitals, banks, and other organizations were hit, but Delta Air Lines bore the brunt of the fallout. The Atlanta-based airline canceled over 7,000 flights within five days, leaving thousands of passengers stranded, while other airlines, also affected, managed to resume operations in a fraction of that time.

According to Delta CEO Ed Bastian, the damage to Delta was significant—$500 million in losses—and largely due to the airline’s reliance on Microsoft’s Windows systems. “The CrowdStrike update took us out,” Bastian said in an interview with CNBC, “but the recovery process was excruciatingly slow. We had to manually reset 40,000 servers.” While Delta scrambled to get its systems back online, other airlines that use similar tech infrastructure returned to normal operations within a couple of days. This disparity has fueled speculation that Delta’s aging IT systems exacerbated the crisis.

Was Delta’s IT Outdated?

Microsoft and CrowdStrike have strongly pushed back against Delta’s narrative, with both companies implying that Delta’s IT infrastructure was behind the times. “Our preliminary review suggests that Delta, unlike its competitors, has not modernized its IT infrastructure, either for the benefit of its customers or for its pilots and flight attendants,” Microsoft attorney Mark Cheffo wrote in a letter to Delta.

CrowdStrike echoed this sentiment, stating, “It’s clear that Delta’s IT decisions—such as its heavy reliance on outdated systems—played a role in their slow recovery. We worked quickly to address the initial problem, but Delta did not act on the solutions offered.”

According to sources familiar with Delta’s internal operations, much of the company’s critical crew-tracking software, which schedules pilots and flight attendants, was running on older systems that were ill-prepared to handle a disruption of this magnitude. A former Delta executive noted, “This crew-tracking system was not robust enough to catch up quickly once it was overwhelmed by the backlog of data during the outage.”

Bastian Defends Delta’s IT Investments

Bastian has remained adamant that Delta’s systems were not to blame for the drawn-out recovery. “Since 2016, we’ve invested billions in IT upgrades and infrastructure,” he said, arguing that the severity of the outage was unprecedented and that Delta was simply unlucky. “We recognized the need to move away from older systems and had already been overhauling our crew-tracking software, which has performed well in previous disruptions.”

Delta’s COO Mike Spanos, who left the airline shortly after the incident, backed this claim, saying, “In the heat of the moment, we opted not to cancel flights en masse, believing a more surgical approach would help minimize customer impact. In hindsight, a more aggressive cancellation strategy could have reduced the chaos.”

However, despite these assertions, internal reports from Delta pilots and union officials suggest that Delta’s systems, particularly its Gate Keeper and crew-tracking systems, were slow to recover and overdue for a major overhaul. “The crew-tracking system is ancient,” said a Delta pilot who asked to remain anonymous. “We were relying on outdated technology, and it failed when we needed it the most.”

The Legal Fallout

The blame game reached its peak when Delta hired David Boies, a high-profile litigator, to pursue damages against both CrowdStrike and Microsoft. In a letter to CrowdStrike dated July 29, Delta claimed that the tech company was responsible for “substantial harm” and demanded $500 million in compensation. CrowdStrike’s response was swift and firm, with a company spokesperson calling Delta’s legal threats “unreasonable” and asserting that their liability was contractually capped at single-digit millions.

Delta’s expectations, according to CrowdStrike, were unrealistic. “Delta wanted us to take full responsibility without any substantiation,” said a CrowdStrike representative. “They didn’t take into account the role their own IT decisions played in prolonging their recovery.”

Microsoft also entered the fray, accusing Delta of declining offers of help. “Microsoft made daily offers to assist Delta,” Cheffo wrote, “but these were turned down repeatedly.” Microsoft claims that Delta’s crew-tracking system, which does not run on Microsoft platforms, was the primary cause of the delays. “This was not a Microsoft problem,” a source close to the company said. “Delta’s internal systems weren’t able to handle the situation.”

Outdated Infrastructure: The Elephant in the Room?

While the CrowdStrike software update has been identified as the trigger for the meltdown, many industry experts and insiders argue that Delta’s underlying technology was the real culprit behind the airline’s delayed recovery. Delta’s rivals, including American Airlines and United, also faced the same faulty update, yet they managed to bounce back far more swiftly, with minimal disruption to their schedules. So why did Delta, a company that prides itself on operational efficiency and punctuality, stumble so badly in its recovery?

According to Microsoft, Delta’s reliance on outdated infrastructure played a significant role in dragging out the recovery process. In a letter from Microsoft’s attorney, Mark Cheffo, the tech giant did not mince words, stating that Delta “apparently has not modernized its IT infrastructure, either for the benefit of its customers or for its pilots and flight attendants.” Microsoft further highlighted that Delta had declined offers of technical support during the outage, raising questions about how prepared Delta really was for such a crisis.

Delta Turned Down Help From Microsoft

“Delta was offered daily assistance from Microsoft starting July 19, when the outage occurred, through July 23,” said Cheffo. “Yet, each time, the airline turned it down. Senior executives from Microsoft also reached out to their Delta counterparts, including CEO Ed Bastian, but they received no response.” This narrative paints a picture of an airline caught flat-footed by the outage and unwilling, or unable, to accept help when it was most needed.

But Delta has pushed back hard on this assertion. In an internal memo to employees, Delta CEO Ed Bastian claimed the airline had made significant investments in its IT systems over the years, citing billions of dollars spent on technology upgrades since 2016. “We have a long track record of investing in safe, reliable, and elevated service for our customers and employees,” Bastian wrote. He also underscored that Delta’s recovery challenges were a direct result of its heavy reliance on Microsoft and CrowdStrike systems. “It’s important to recognize that Delta’s IT infrastructure is among the most complex in the industry, and the failure of CrowdStrike’s update to properly integrate with Microsoft Windows caused significant disruptions that could not be easily fixed.”

Delta’s Crew Tracking System “Limping Along for Years”

However, others within the airline and cybersecurity sectors suggest that Delta’s IT infrastructure has long been overdue for a comprehensive overhaul. According to a pilot union leader, who requested anonymity, Delta’s crew-tracking system — a critical component that matches pilots and flight attendants to flights — was already a known weak point before the CrowdStrike outage. “That system has been limping along for years. It doesn’t surprise me that it crashed so spectacularly,” the union leader said. “The sheer volume of data and operational complexity was just too much for the outdated system to handle once the outage hit.”

Moreover, internal documents viewed by The Wall Street Journal in its article (linked above) reveal that Delta has been planning to modernize its crew IT systems for years, but much of that work was slowed down during the COVID-19 pandemic. A presentation to pilots in June of 2024, just weeks before the outage, outlined a roadmap for updating the crew-tracking infrastructure. Still, it was not clear if those changes would have been in place in time to mitigate the fallout from the CrowdStrike incident. “We recognize the need to move away from these 40-plus-year-old systems,” said Philip Higgins, Delta’s managing director of operations, in a recording of the meeting.

Delta IT “Running on Fumes for Years”

Yet, the pace of these upgrades raises questions about Delta’s prioritization of its tech investments. A former Delta executive, speaking on the condition of anonymity, was more critical, suggesting that the airline’s leadership has been too focused on short-term cost savings at the expense of long-term resilience. “Look, it’s no secret that Delta likes to spend money on things passengers can see — the flashy airport lounges, the new planes, the service upgrades. But when it comes to the backbone of their operation, they’ve been running on fumes for years,” the former executive said. “This outage exposed the cracks in that approach. You can’t keep running mission-critical systems on legacy infrastructure and expect everything to hold together when disaster strikes.”

Delta’s crew-tracking system was at the heart of the airline’s operational meltdown, taking days to catch up with the flood of delayed and canceled flights. Pilots and flight attendants were stranded in the wrong cities and unable to be reassigned to new flights because the system couldn’t process the backlog of data fast enough. The airline was forced to rely on manual processes to match available crew to planes — a method that was not only inefficient but entirely unsustainable given the scale of the disruption. “We had pilots who couldn’t get through to the system for days,” said a Delta pilot, who was caught in the chaos. “We were being asked to self-report where we were on Monday because the airline had lost track of us by Friday. It was an absolute mess.”

Delta’s Outage a “Wake-Up Call” to the Entire IT Industry

Adding to the complexity, Delta’s scheduling systems, including its Gate Keeper program, which manages the flow of planes through the airline’s Atlanta hub, also struggled to recover from the outage. The software snarls forced Delta to reduce traffic to just 20 arrivals and departures per hour, far below the normal 50 to 60 flights per hour, resulting in a cascading effect throughout Delta’s global network. Thousands of flights were canceled, and tens of thousands of passengers were left stranded at airports across the world. “The slow recovery wasn’t just about fixing computers — it was about trying to untangle a massive web of disrupted operations,” said John Laughter, Delta’s senior vice president of operations.

This slow recovery and the evident strain on Delta’s aging systems have led to a deeper conversation within the aviation industry about how airlines allocate resources to their IT infrastructure. For years, airlines have been heavily dependent on complex, interlocking systems to manage everything from ticket sales to flight operations. “When those systems work, they’re invisible,” said one airline IT consultant who has worked with major U.S. carriers. “But when they fail, the cracks become very visible, very quickly. Delta’s outage should be a wake-up call not just for them, but for the entire industry.”

Delta IT Checklist (ASAP!)

Delta’s technology woes may lead to substantial changes in how the airline approaches its IT investments. Bastian has acknowledged that Delta is conducting a thorough review of its response to the outage, and the airline is reportedly considering a new vendor to replace CrowdStrike. However, replacing one software provider may not be enough to resolve the deeper issue of outdated infrastructure. “This isn’t just about blaming CrowdStrike or Microsoft,” said a former Delta IT manager. “This is about Delta taking a hard look at the systems they’ve been relying on for decades and realizing that if they don’t modernize now, they’re going to keep facing these kinds of crises.”

What Delta Should Do ASAP to Modernize Its IT:

In the aftermath of Delta Air Lines’ disastrous IT outage caused by a faulty CrowdStrike software update in July 2024, the question of whether Delta’s outdated IT infrastructure exacerbated the airline’s slow recovery has become a focal point of discussion. While the blame game between Delta, CrowdStrike, and Microsoft continues, it is clear that Delta’s reliance on legacy systems has left the airline vulnerable to such operational meltdowns. Modernizing Delta’s IT infrastructure is not just an operational necessity—it’s a strategic imperative.

WebProNews analyzed reams of IT expert talking points following Delta’s IT breakdown, and it comes down to just a few clear strategic IT initiatives. Delta should consider implementing the following steps as soon as possible to safeguard its operations from future disruptions and maintain its reputation as an industry leader in reliability.

1. Move to a Cloud-First Approach

Delta has long relied on on-premises systems, particularly in areas such as crew scheduling, gate management, and customer service. While cloud technology has been revolutionizing industries for over a decade, Delta’s commitment to fully transitioning to the cloud has been slow.

“Delta’s infrastructure is a patchwork of legacy systems and on-premise software that’s been duct-taped together over the years,” said an IT consultant familiar with Delta’s operations. “The first step toward real modernization is adopting a cloud-first strategy.”

The cloud offers the scalability, flexibility, and redundancy that Delta needs to avoid future disasters. Cloud-based systems would allow Delta to scale up resources when necessary, better handle the vast amounts of data generated by airline operations, and ensure that systems can quickly recover from failures.

What Delta should do: Transition critical applications, such as crew scheduling and gate management, to cloud-based solutions. This could involve partnerships with cloud service providers like Amazon Web Services (AWS) or Microsoft Azure, ensuring real-time access to vital systems from anywhere, along with built-in redundancy and disaster recovery options.
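To make the redundancy point concrete, here is a minimal sketch, in Python, of the kind of multi-region fallback a cloud-hosted scheduling service makes possible. The endpoint URLs and function names are invented for illustration and are not Delta’s actual systems.

```python
# Hypothetical multi-region fallback: read from the primary region's
# endpoint, then try secondaries if it is unreachable. URLs are placeholders.
import urllib.request

REGION_ENDPOINTS = [
    "https://us-east.scheduling.example.com/api/roster",   # primary
    "https://us-west.scheduling.example.com/api/roster",   # secondary
]

def fetch_roster():
    last_error = None
    for url in REGION_ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=2.0) as resp:
                return resp.read()          # first healthy region wins
        except OSError as exc:
            last_error = exc                # fall through to the next region
    raise RuntimeError("all regions unavailable") from last_error
```

The pattern itself is simple, but it only works if application state is replicated across regions, which is exactly the plumbing cloud platforms provide out of the box.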

2. Overhaul Crew Scheduling and Operations Systems

One of the most significant pain points during the recent outage was the failure of Delta’s crew scheduling system, which left pilots and flight attendants stranded in the wrong locations with no easy way to track or reassign them. This outdated system, running on legacy software, couldn’t handle the backlog of flights and crews, contributing to Delta’s slow recovery.

A former Delta pilot described the chaos, saying, “The system was completely overwhelmed. We were essentially invisible to the airline for days, and that led to more cancellations and delays than necessary. It’s a system that should have been upgraded years ago.”

Modernizing crew scheduling with more robust, real-time solutions is critical. These systems must be able to process large amounts of data instantly, ensure crews are in the right place at the right time, and quickly recover from outages or other disruptions.

What Delta should do: Invest in modern crew management software that can handle the airline’s complex operations. Leveraging AI and machine learning could optimize crew allocation during irregular operations (IROPS) and provide predictive analytics for more efficient planning.
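As a simplified illustration of what such optimization looks like, the sketch below, using invented crews, flights, and costs, treats crew reassignment during irregular operations as a minimum-cost assignment problem, the kind of computation a modern crew management system would run continuously at far larger scale.

```python
# Hypothetical illustration: reassign stranded crews to uncovered flights
# by solving a minimum-cost assignment problem. All data is invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

crews = ["crew_A", "crew_B", "crew_C"]      # stranded crews
flights = ["FL100", "FL200", "FL300"]       # uncovered departures

# cost[i][j]: penalty for sending crew i to flight j, combining estimated
# repositioning time (hours) and projected duty-time overrun (hours).
reposition_hours = np.array([[1.0, 4.0, 8.0],
                             [3.0, 0.5, 6.0],
                             [7.0, 5.0, 1.5]])
duty_overrun = np.array([[0.0, 2.0, 5.0],
                         [1.0, 0.0, 3.0],
                         [4.0, 2.0, 0.0]])
cost = reposition_hours + 2.0 * duty_overrun   # weight legality risk more heavily

rows, cols = linear_sum_assignment(cost)       # Hungarian-style optimal matching
for i, j in zip(rows, cols):
    print(f"{crews[i]} -> {flights[j]} (cost {cost[i, j]:.1f}h)")
```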

3. Invest in AI-Driven Operational Resilience

Airlines are complex, interconnected systems, and disruptions can ripple through them rapidly. Artificial intelligence (AI) has the potential to predict disruptions, optimize recovery, and offer real-time solutions to mitigate the impact of technical failures.

AI-driven tools can help airlines like Delta analyze vast amounts of operational data to foresee delays, optimize flight paths, and even provide real-time recommendations during operational crises. Several airlines, including Delta’s competitors, have already begun integrating AI-driven solutions into their operations to enhance resilience.

“AI offers a way for airlines to preemptively manage disruptions, rather than just reacting after the fact,” said an industry expert in aviation technology. “It’s a tool that, when fully integrated, could have minimized the impact of this summer’s outage.”

What Delta should do: Partner with AI-focused technology firms to implement predictive maintenance systems, real-time crew management tools, and optimization algorithms that can keep operations running smoothly during both normal conditions and disruptions.
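For a sense of what “predictive” means in practice, here is a minimal sketch of a delay-risk classifier trained on synthetic data with scikit-learn. The features are invented stand-ins for the operational signals a real system would draw from flight, crew, and weather feeds.

```python
# Hypothetical delay-risk model; all features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Invented features: inbound-aircraft lateness, hub congestion, weather severity.
X = np.column_stack([
    rng.exponential(15, n),      # inbound lateness (minutes)
    rng.uniform(0, 1, n),        # hub congestion index
    rng.normal(0, 1, n),         # weather severity score
])
# Synthetic label: 1 if the flight departs more than 15 minutes late.
y = ((0.04 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2]
      + rng.normal(0, 0.5, n)) > 1.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
```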

4. Modernize Gate and Ground Operations Technology

Delta’s Atlanta hub is one of the busiest in the world, and during the CrowdStrike outage, gate operations ground to a halt. The system that controls the movement of planes through Delta’s hub, known as Gate Keeper, became a choke point during the crisis, reducing the number of flights Delta could handle by more than 50%. This technology, much like the crew scheduling system, needs an immediate overhaul.

Beyond gate management, the airline’s ground operations technology—systems that manage baggage, boarding, and customer service—need to be fully modernized. Any delays or malfunctions in these systems have a direct impact on Delta’s ability to deliver on its promise of timely, efficient service.

“Gate and ground operations are critical cogs in the machine of any airline. The moment these systems start faltering, the entire network can fall apart,” said a former Delta executive. “Delta can’t afford to be complacent with outdated systems in this area.”

What Delta should do: Upgrade the Gate Keeper system and other ground operations technology with modern, cloud-based solutions that are capable of handling large volumes of data and traffic. Implement redundancy and failover systems to ensure smooth operations even in the event of a failure.

5. Develop a Robust Disaster Recovery Plan

One of the most glaring lessons from the CrowdStrike outage is the importance of having a robust disaster recovery (DR) and business continuity plan (BCP). While Delta undoubtedly had some level of disaster recovery in place, the scale and duration of the outage suggest that these plans were either insufficient or not properly executed.

“Other airlines experienced the same software bug but managed to recover quickly because they had well-thought-out disaster recovery plans,” said an industry insider. “Delta’s delayed recovery is evidence that they need to go back to the drawing board.”

What Delta should do: Create a comprehensive disaster recovery plan that includes regular testing, real-time simulations, and redundant systems that can be quickly brought online in the event of a failure. This plan should encompass all critical systems, including crew scheduling, flight operations, customer service, and IT infrastructure.
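One small but essential building block of such a plan is automated failover. The sketch below, with placeholder endpoints, shows the basic pattern: probe the primary system’s health and promote a standby after repeated failures.

```python
# Hypothetical failover probe: poll the primary's health endpoint and cut
# over to a standby after consecutive failures. URLs are placeholders.
import time
import urllib.request

PRIMARY = "https://primary.example.com/health"
STANDBY = "https://standby.example.com/health"
MAX_FAILURES = 3        # consecutive failures that trigger failover

def healthy(url, timeout=2.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor():
    active, failures = PRIMARY, 0
    while True:
        if healthy(active):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES and active == PRIMARY:
                active, failures = STANDBY, 0
                print("failover: routing traffic to standby")
        time.sleep(5)   # probe interval in seconds

if __name__ == "__main__":
    monitor()
```

Real disaster recovery involves far more than this (data replication, runbooks, regular game-day drills), but every piece should be as automated, and as regularly exercised, as a loop like this one.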

6. Enhance Cybersecurity and Vendor Management

The relationship between Delta, CrowdStrike, and Microsoft is strained, with each side pointing fingers over the July outage. While Delta works to modernize its IT infrastructure, it must also reassess its cybersecurity posture and vendor management practices.

CrowdStrike’s security software was supposed to protect Delta, but when the update went wrong, the reliance on third-party software proved to be a vulnerability. In a world where cyber threats are increasing, Delta can no longer afford to leave its cybersecurity entirely in the hands of external vendors.

What Delta should do: Conduct a thorough audit of all third-party software and service providers, and implement strict oversight to ensure all systems are secure and regularly updated. Delta should also consider bringing more cybersecurity capabilities in-house to reduce its reliance on external vendors for mission-critical systems.
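
A modest first step toward that oversight is knowing exactly what third-party code is running. The sketch below is a deliberately narrow illustration that inventories the Python packages on a single host and compares them against a hypothetical approved-version list; a genuine audit would extend to OS packages, agents, firmware, and vendor services.

```python
# Inventory installed Python packages for a third-party software audit.
# Illustrative sketch; a real audit covers far more than one runtime.
from importlib.metadata import distributions

APPROVED = {"requests": "2.31.0"}  # hypothetical approved-version list

for dist in distributions():
    name = dist.metadata["Name"] or ""
    expected = APPROVED.get(name.lower())
    if expected and expected != dist.version:
        print(f"DRIFT: {name} {dist.version} (approved: {expected})")
```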

7. Rethink IT Talent and Culture

A critical component of any IT modernization effort is the people behind it. Delta has historically outsourced much of its IT work, relying on vendors like IBM and other managed service providers. While this may reduce costs in the short term, it can lead to a lack of institutional knowledge and agility during times of crisis.

In the long run, Delta needs to invest in building a strong internal IT team capable of handling both day-to-day operations and emergencies. This team must also foster a culture of innovation and continuous improvement to keep pace with technological advancements.

What Delta should do: Focus on recruiting top IT talent, particularly in cloud computing, AI, and cybersecurity. Delta should also invest in training and development to ensure its in-house IT team is well-equipped to handle future challenges.

The CrowdStrike outage was a wake-up call for Delta Air Lines, exposing deep vulnerabilities in its IT infrastructure. Delta needs to act decisively and swiftly to modernize its technology to prevent a repeat of the summer’s chaos. Delta’s path to IT modernization is clear: from cloud migration and AI implementation to stronger cybersecurity and a robust disaster recovery plan. The question now is whether the airline will seize this opportunity and future-proof its operations—or risk being left behind in an industry that depends more than ever on technology.

A Call for Accountability

Delta’s meltdown has sparked broader conversations about the vulnerability of modern airlines to tech outages and the need for robust disaster recovery plans. Industry experts are quick to point out that while tech failures like the CrowdStrike update are unpredictable, the ability to recover quickly is a matter of preparation.

One senior IT consultant remarked, “It’s not about whether your system will fail; it’s about how quickly you can get it back online. Delta’s response shows that they didn’t have a strong enough business continuity plan in place, and that’s what really prolonged the chaos.”

As the legal battle unfolds, the question of whether Delta’s outdated IT infrastructure exacerbated the recovery remains at the center of the controversy. What’s clear is that in today’s interconnected world, airlines—and other industries—are only as resilient as their tech systems allow them to be. Delta’s experience may well serve as a cautionary tale for companies relying on aging systems in an era of rapid technological advancement.

It should be noted that the right decisions on IT spending always look obvious in hindsight and are difficult to make when operations are running smoothly. Delta is one of the largest airlines in the world and made strategic decisions to go with premium partners like Microsoft and CrowdStrike. The company was clearly spending a lot on infrastructure via these partnerships.

The question is whether Delta waited too long to upgrade the other company-controlled software and IT systems critical to its operations, or whether it was simply the victim of a bad software update from a valued partner. The real answer is likely very nuanced, with plenty of blame to go around. In other words, shit happens.

]]>
607494
The Brave New World of the AI-Leveraged Network https://www.webpronews.com/the-brave-new-world-of-the-ai-leveraged-network/ Wed, 04 Sep 2024 08:46:09 +0000 https://www.webpronews.com/?p=607491 As artificial intelligence (AI) continues to revolutionize industries and business operations, the focus is now shifting to the underlying infrastructure that will power this transformation: the network. AI’s data-hungry applications require robust, high-speed networks capable of handling immense data flows, ensuring low latency, and maintaining security. But as businesses venture into this brave new world, the question remains: are current networks equipped to handle AI’s growing demands? The answer is a resounding no. Networks must evolve, and businesses need to rethink their strategies to support the full potential of AI.

The AI Boom: How Networks are Being Tested

The explosion of AI-driven operations is pushing network infrastructure to its limits. According to Pierre-Marie Binvel, Director of Connectivity Product Marketing at Orange Business, “AI represents a paradigm shift for networks, similar to the internet boom of the late 1990s.” He explains that networks are no longer just the connective tissue between data centers, devices, and the cloud. Today, they play a critical role in determining how effectively AI can be deployed across industries.

“Enterprises are investing heavily in AI and generative AI models, but without the network to support them, these investments will fall short,” Binvel adds. “AI models require massive data transfers and real-time processing, both of which depend on a high-performance network that can handle low latency and congestion management.”

Paul McMillan, an AI strategist, echoes this sentiment. “AI applications, particularly generative AI, are incredibly data-intensive, and they need agile, responsive networks. It’s not just about bandwidth anymore; it’s about speed, reliability, and the ability to scale quickly.” McMillan emphasizes that AI models require networks to be as dynamic as the AI algorithms themselves.

Networks: The New Bottleneck

As AI models grow more complex, they require more data and computational power. But the real bottleneck, according to Serge Lucio, VP at Broadcom Inc., lies not in computing power, but in the network. “AI systems are highly reliant upon the underlying network infrastructure. When AI data and workloads are distributed across multiple nodes, you need high bandwidth and low latency networks to ensure everything runs smoothly,” he notes.

Lucio further explains that even as companies upgrade their computational resources with GPUs and TPUs, the performance of AI applications is only as good as the network supporting them. “It’s not enough to focus on processing speed. Without a network that can efficiently distribute and process data, AI systems will hit a performance wall.”

Data Movement: The Lifeblood of AI

AI-driven applications operate on vast amounts of data. Whether it’s training a machine learning model or deploying AI algorithms in real-time, the movement of data is crucial. The traditional data center-centric network model simply doesn’t cut it anymore. “In the past, data mostly flowed from the cloud to a local site and back,” says McMillan. “With AI, data is constantly moving from cloud to cloud, site to site, and cloud to site. The direction of travel has changed, and so must the network.”

Binvel agrees, pointing out that decentralized data storage is becoming the norm. “Data is increasingly created and processed at the edge of networks, outside of traditional data centers. AI applications running on edge devices rely heavily on real-time data processing, which requires networks to have the capacity, bandwidth, and low latency to ensure seamless operations.”

Building the AI-Ready Network

To create a network that can support AI’s vast demands, businesses need to rethink their strategies. Pierre-Marie Binvel outlines four key considerations for building an AI-capable network:

1. Planning for the Future
“Businesses need to plan for their AI use cases,” Binvel asserts. “By understanding where AI is hosted and how it’s used, enterprises can assess the network’s transformation path.” This planning requires an analysis of compute power, connectivity needs, and how data will be moved to optimize the total cost of ownership.

2. Prioritizing Privacy, Security, and Compliance
AI’s reliance on decentralized data storage presents significant security challenges. “With more data in transit, the complexity of securing this data increases,” notes McMillan. Regulatory compliance becomes a priority, and businesses must ensure that their networks allow for secure, compliant data handling while still maintaining flexibility.

3. Governance and Visibility
A network’s ability to enable AI governance is critical to avoiding errors, breaches, and security exposures. “Good governance requires clear visibility into network infrastructure and how it handles data,” says Binvel. Without trust in the network’s ability to manage data securely, businesses may struggle to deploy AI confidently.

4. Managing Traffic and Congestion
As AI applications scale, so too will network traffic. McMillan emphasizes the importance of congestion management: “More data on the network means more potential for bottlenecks. Businesses must implement systems that can handle traffic spikes, ensuring that AI applications run without disruption.”
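
One classic building block for the congestion management McMillan describes is a token bucket, which absorbs short traffic bursts while capping the sustained rate at which requests are admitted. The sketch below is a generic illustration with arbitrary rates, not any vendor's implementation.

```python
# Token-bucket rate limiter: a classic congestion-management primitive.
# Generic sketch; the rate and capacity here are arbitrary examples.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, up to the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # caller should queue, drop, or back off

bucket = TokenBucket(rate=100.0, capacity=200.0)  # ~100 requests/sec
if not bucket.allow():
    print("Traffic spike: request deferred")
```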

On-Demand Networks for the AI Age

One key solution to AI’s network demands is the development of on-demand networks. These networks are designed to adapt, flex, and scale as AI use cases evolve. According to Binvel, “The future of networking lies in building infrastructure that can respond dynamically to changing business needs, particularly those driven by AI.” He describes how companies like Orange Business are creating “network-as-a-service” platforms, which allow enterprises to scale network capabilities based on demand without requiring significant hardware investments.

McMillan concurs, noting that on-demand networks will play a critical role in ensuring AI’s success. “AI is pushing the boundaries of what networks can do, and on-demand solutions are the only way to keep up with the pace of AI innovation.”

A Brave New Network for AI

The future of AI is bright, but it hinges on the evolution of networks. As AI applications continue to grow in complexity, networks must adapt to handle the increased demand for data movement, low latency, and security. The brave new world of AI-leveraged networks will require businesses to rethink their strategies, prioritize network infrastructure, and invest in future-proof, on-demand solutions.

As McMillan aptly concludes, “AI is not just transforming businesses; it’s transforming networks. And for companies to fully realize the benefits of AI, they must first ensure their networks are ready to support the revolution.”

]]>
607491
Data Broker At Center of Data Leak Involving 170 Million Records https://www.webpronews.com/data-broker-at-center-of-data-leak-involving-170-million-records/ Tue, 03 Sep 2024 19:45:02 +0000 https://www.webpronews.com/?p=607469 Data broker People Data Labs (PDL) appears to be at the center of a massive data breach, one that has exposed at least 170 million records.

Cybernews reports that its research team found a dataset online that contained more than 170 million records. The dataset was exposed via an unprotected Elasticsearch server, although it was not directly connected to PDL. As a result, the leak could be the result of a mishandled server belonging to one of PDL’s partner companies.
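
Exposed Elasticsearch servers are easy to find precisely because an unauthenticated instance answers standard REST queries to anyone who can reach it. The sketch below, which uses a hypothetical host, shows how an open cluster reveals itself; the same check is a quick way to confirm that your own deployments require credentials (only probe systems you own).

```python
# Check whether an Elasticsearch endpoint answers without credentials.
# "es.example.com" is a hypothetical host; test only systems you own.
import requests

url = "http://es.example.com:9200/_cat/indices?format=json"
resp = requests.get(url, timeout=5)

if resp.status_code == 200:
    # An open cluster happily lists its indices to anonymous callers.
    for index in resp.json():
        print(index["index"], index.get("docs.count"))
elif resp.status_code == 401:
    print("Good: the cluster requires authentication.")
```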

The leaked data includes:

  • Full names
  • Phone numbers
  • Emails
  • Location data
  • Skills
  • Professional summaries
  • Education background
  • Employment history

Unfortunately, this is not the first time PDL has been involved in a data leak. As Cybernews reports, PDL suffered a data leak of more than a billion records in 2019. Interestingly, that breach was also the result of an unprotected Elasticsearch server, raising the possibility that this latest leak could be a subset of the data from the original 2019 breach.

As the outlet points out, the breach brings increased scrutiny on the data broker industry.

“The existence of data brokers is already a controversial issue, as they often have insufficient checks and controls to ensure that data doesn’t get sold to the wrong parties. Leaking large segments of their datasets makes it easier and more convenient for threat actors to abuse the data for large-scale attacks,” said the Cybernews research team.

Unlike the EU, the US lacks comprehensive privacy legislation, meaning data brokers are not nearly as regulated as on the other side of the Atlantic. As a result, users’ data—as well as their privacy—continues to be collected, saved, bartered, sold, used, and abused.

While a data breach is never a good thing, this one will hopefully add to the growing chorus of users, lawmakers, and critics calling for more oversight of such companies.

]]>
607469
Revolutionizing IT: How AI-Driven SASE is Shaping the Future of Enterprise Technology https://www.webpronews.com/revolutionizing-it-how-ai-driven-sase-is-shaping-the-future-of-enterprise-technology/ Tue, 03 Sep 2024 11:25:57 +0000 https://www.webpronews.com/?p=607333 As organizations navigate the complexities of a digital-first world, the integration of artificial intelligence (AI) with Secure Access Service Edge (SASE) technology is rapidly transforming IT landscapes. SASE, a framework that converges networking and security into a single cloud-based service model, has been increasingly recognized as the future of enterprise IT management. With AI now playing a central role in this transformation, businesses are finding new ways to streamline operations, enhance security, and optimize the end-user experience.

At the forefront of this revolution is Palo Alto Networks, whose upcoming virtual event, SASE Converge 2024, promises to showcase the latest advancements in AI-powered SASE technology. The event will bring together industry leaders, technologists, and decision-makers to explore how AI can further elevate the capabilities of SASE, making it a cornerstone of modern IT infrastructure.

The Evolution of SASE: From Concept to Cornerstone

Secure Access Service Edge, or SASE, has been a buzzword in the IT industry for several years, but its adoption has accelerated in response to the growing demands of a hybrid workforce. According to Anand Oswal, Senior Vice President and General Manager of Network Security at Palo Alto Networks, “SASE is no longer just a concept; it’s now an essential strategy for enterprises looking to secure their networks while providing seamless access to a distributed workforce.”

The integration of AI into SASE has further amplified its potential, allowing organizations to automate complex security tasks and deliver faster, more reliable network performance. “AI-powered SASE is about more than just combining networking and security,” Oswal explains. “It’s about using AI to predict and prevent threats in real-time, optimize application performance, and provide a seamless experience for users, regardless of where they are.”

The AI Advantage: Enhancing Security and Efficiency

One of the key benefits of integrating AI with SASE is the ability to enhance security through automation and real-time threat detection. As cybersecurity threats become more sophisticated, traditional security measures are often inadequate. AI can analyze vast amounts of data in real-time, identifying patterns that could indicate a security breach before it occurs. This proactive approach is critical in an era where cyberattacks are increasingly common and costly.

Mignona Cote, Senior Vice President and Chief Security Officer at NetApp, emphasizes the importance of AI in modern cybersecurity strategies. “AI allows us to stay ahead of the curve by automating threat detection and response. In the context of SASE, AI not only improves security but also reduces the time and resources required to manage security operations,” she says.

Palo Alto Networks’ AI-powered SASE platform, Prisma SASE, exemplifies this approach. According to the company, the platform delivers a 50% reduction in data breach risk, a 107% average return on investment, and a 75% improvement in operational efficiency. These metrics underscore the transformative impact that AI can have on IT security and operations.

Transforming the IT Experience: From User to Administrator

AI-powered SASE is also revolutionizing the IT experience for both end-users and administrators. For end-users, the technology ensures a seamless and secure connection to applications, regardless of their location or device. This is particularly important in a hybrid work environment, where employees need to access corporate resources from various locations and devices.

“One of the most significant challenges in today’s IT environment is managing the myriad of devices that employees use,” explains Unnikrishnan KP, Chief Marketing Officer at Palo Alto Networks. “AI-powered SASE makes it easier to manage BYOD (Bring Your Own Device) policies and ensures that unmanaged devices do not become a security liability.”

For IT administrators, AI-powered SASE simplifies the management of complex networks and security policies. The technology enables centralized management, where policies can be applied consistently across all users and devices, reducing the potential for human error. Additionally, AI can automate routine tasks such as software updates and security patches, freeing up IT staff to focus on more strategic initiatives.

Anupam Upadhyaya, Vice President of Product Management at Palo Alto Networks, highlights the operational benefits of AI in SASE. “With AI, we’re able to automate many of the repetitive tasks that bog down IT teams. This not only improves efficiency but also reduces the risk of errors that can lead to security vulnerabilities,” he notes.

Real-World Applications: Customer Success Stories

The true value of AI-powered SASE becomes evident when examining real-world use cases. During SASE Converge 2024, Palo Alto Networks will present several customer success stories that demonstrate how organizations are leveraging the technology to enhance their IT and security operations.

One notable example is a global financial services firm that implemented Prisma SASE to manage its distributed workforce. The firm faced challenges with securing sensitive financial data while providing employees with fast, reliable access to applications. By adopting AI-powered SASE, the company was able to reduce its data breach risk by 60%, improve application performance, and achieve a 120% return on investment within the first year.

Another case study involves a multinational manufacturing company that used AI-powered SASE to streamline its IT operations. The company’s IT infrastructure was complex, with multiple legacy systems and a large number of remote workers. By implementing AI-powered SASE, the company was able to reduce its operational costs by 30% and significantly improve the user experience for its employees.

These examples illustrate how AI-powered SASE is not just a theoretical concept but a practical solution that delivers measurable benefits. As more organizations adopt this technology, its impact on the IT landscape is expected to grow.

The Future of SASE: What Lies Ahead

As SASE continues to evolve, the integration of AI will play an increasingly important role in shaping its future. Ofer Ben Noon, SASE Chief Technology Officer and Vice President of Prisma Access Browser GTM at Palo Alto Networks, envisions a future where AI-powered SASE becomes the standard for enterprise IT infrastructure. “We’re only scratching the surface of what’s possible with AI and SASE. As AI technology advances, we’ll see even more sophisticated use cases, from predictive analytics to fully autonomous networks,” he predicts.

Looking ahead, the convergence of AI, networking, and security within the SASE framework will likely lead to new innovations that further enhance the capabilities of IT organizations. These advancements will be crucial as businesses continue to navigate the challenges of digital transformation and the growing demands of a hybrid workforce.

Embracing the AI-Powered SASE Revolution

The integration of AI with SASE represents a significant shift in how organizations approach IT and security. By combining the strengths of AI with the flexibility of SASE, businesses can achieve a level of security, efficiency, and user experience that was previously unattainable.

As organizations prepare for the future, the adoption of AI-powered SASE will become increasingly essential. “The time to embrace AI-powered SASE is now,” says Swapna Bapat, Vice President of Product Management at Palo Alto Networks. “Those who do will be well-positioned to lead in the digital age, while those who don’t risk falling behind.”

SASE Converge 2024 will provide attendees with the knowledge and insights they need to harness the full potential of AI-powered SASE. From understanding the latest innovations to exploring real-world applications, the event promises to be a valuable resource for anyone involved in IT and security. As the industry moves forward, one thing is clear: AI-powered SASE is not just transforming IT—it’s redefining the future of enterprise technology.

]]>
607333
Generative AI Success for the Enterprise: Six Key Priorities https://www.webpronews.com/generative-ai-success-for-the-enterprise-pwc-reports-six-key-priorities/ Tue, 03 Sep 2024 08:06:29 +0000 https://www.webpronews.com/?p=607322 In the hyper-evolving world of generative AI, enterprises are racing to harness the potential of this transformative technology. Yet, the path to success is fraught with challenges, requiring a delicate balance between innovation and risk management. PwC’s recent “Early Days” Generative AI report highlights six key priorities that organizations at the forefront of AI adoption are focusing on to navigate these complexities and set the stage for long-term success.

1. Managing the AI Risk/Reward Tug-of-War

Generative AI has sparked both excitement and concern within the business community. On one hand, the technology promises unprecedented opportunities for competitive advantage; on the other, it raises significant ethical, legal, and technical risks. As PwC emphasizes, achieving a healthy balance between these competing forces is crucial.

“There’s a fascinating parallel between the excitement and anxiety generated by AI in the global business environment writ large, and in individual organizations,” PwC notes. The report underscores that while some companies have managed this tension effectively, others have struggled, leading to either paralysis or reckless decision-making with potentially disastrous consequences.

One example provided in the report involves a company that found itself at an impasse because the only team capable of validating its AI models was the same team that had developed them—a clear conflict of interest. In contrast, another company made swift progress by establishing a cross-functional leadership team early on to ensure enterprise-wide consistency and risk alignment. This proactive approach not only addressed the ethical implications of AI but also maintained momentum, allowing the company to leverage AI responsibly and effectively.

2. Aligning Generative AI with Digital Strategy

The second priority for successful AI adoption is aligning generative AI strategies with existing digital transformation efforts. PwC points out that many organizations are still grappling with their digital journeys, and the introduction of generative AI adds another layer of complexity.

“Generative AI’s primary output is digital—digital data, assets, and analytic insights, whose impact is greatest when applied to and used in combination with existing digital tools,” the report states. To maximize the benefits of AI, organizations must ensure that their AI initiatives complement and enhance their broader digital strategies.

A case study from the report illustrates this point vividly. A global consumer packaged goods company initially deployed generative AI in its customer service operations, automating the process of filling out service tickets and answering customer queries. However, as the company’s leaders explored further, they realized that the same AI models could be applied to other business functions, such as procurement and HR, leading to much greater gains than initially anticipated. This cross-functional approach exemplifies how integrating AI with digital strategies can unlock significant value.

3. Experimenting with an Eye for Scaling

Experimentation is key to discovering high-value applications of generative AI, but without a plan for scaling, even the most promising pilots can fall short of their potential. PwC’s report highlights the importance of engaging senior leadership early on to ensure that AI experiments are aligned with the organization’s broader strategic goals.

“Experimentation is valuable with generative AI because it’s a highly versatile tool, akin to a digital Swiss Army knife,” the report notes. However, the risk of “pilot purgatory”—where AI projects get stuck in the experimental phase without delivering tangible value—is real. To avoid this, PwC recommends identifying patterns and repeatable processes that can be scaled across the organization.

One example provided involves a financial services company that initially focused its AI efforts on automating HR tasks. By involving the CIO and CISO in these early experiments, the company was able to identify broader applications for the technology, leading to a more comprehensive and scalable AI strategy.

4. Developing a Productivity Plan

Generative AI’s ability to boost productivity is well-documented, but organizations need a clear plan for how to leverage these gains. PwC identifies three main approaches: reinvesting productivity gains to boost output, reducing labor input to cut costs, or pursuing a combination of both.

The report highlights several companies that have successfully implemented AI-driven productivity improvements. For instance, PwC firms in mainland China and Hong Kong reported efficiency gains of up to 50% in code generation and 80% in internal translation tasks, thanks to small-scale AI pilots. These productivity gains not only enhance operational efficiency but also improve employee satisfaction by eliminating repetitive, mundane tasks.

However, PwC warns that automating low-value work can shift the burden onto more strategic tasks, potentially leading to burnout. “Organizations will want to take their workforce’s temperature as they determine how much freed capacity they redeploy versus taking the opportunity to reenergize a previously overstretched employee base,” the report advises.

5. Putting People at the Heart of Your AI Strategy

One of the most critical priorities for generative AI success is ensuring that employees are fully engaged in the AI journey. PwC’s report highlights the importance of early and ongoing communication with employees about the role of AI and the benefits it can bring to their work.

“Our 26th Annual Global CEO Survey found that 69% of leaders planned to invest in technologies such as AI this year,” the report states. Yet, many employees remain uncertain or unaware of how these technologies will impact them. To bridge this gap, PwC recommends involving employees in the creation and selection of AI tools, providing customized training and upskilling opportunities, and fostering a culture that embraces human–AI collaboration.

One example of successful employee engagement comes from a financial services firm that encourages its employees to experiment with AI tools and even celebrates small-scale failures as part of the innovation process. “Failures mark innovation and are expected, and even celebrated,” the report notes, underscoring the importance of creating a safe environment for AI experimentation.

6. Collaborating with the Ecosystem

The final priority identified by PwC is the importance of working with external partners to unlock the full potential of generative AI. “Companies with a clear ecosystem strategy are significantly more likely to outperform those without one,” the report asserts.

In the healthcare and pharmaceutical industries, for example, collaboration between organizations has long been limited by privacy concerns and regulatory hurdles. However, AI is beginning to change that dynamic by enabling safe, large-scale data-sharing and data-pooling. PwC highlights the potential for generative AI to revolutionize drug development and precision medicine through the creation of synthetic datasets that can be shared across organizations.

By working closely with suppliers, service providers, customers, and other partners, organizations can not only enhance their own AI capabilities but also drive broader industry innovation. “The holy grail of healthcare and pharmaceutical firms is the ability to access patient records at scale and identify patterns that could uncover routes to more effective treatments,” the report notes, emphasizing the transformative potential of ecosystem collaboration.

Strategic GenAI Adoption is Key

As generative AI continues to reshape the business landscape, organizations must be strategic in their approach to adoption. PwC’s six key priorities—managing risk, aligning AI with digital strategy, experimenting with scaling in mind, developing a productivity plan, putting people at the center, and collaborating with the ecosystem—provide a roadmap for enterprises looking to succeed in this new era.

While it’s still early days for generative AI, the lessons learned from these early adopters will be invaluable as the technology matures. By following these priorities, organizations can not only navigate the complexities of AI adoption but also position themselves as leaders in the next wave of digital transformation.

]]>
607322
No, You Can’t Uninstall Windows Recall https://www.webpronews.com/no-you-cant-uninstall-windows-recall/ Tue, 03 Sep 2024 00:06:13 +0000 https://www.webpronews.com/?p=607309 Users hoping they would be able to uninstall Recall are in for a disappointment, with Microsoft saying the dialog box that suggested it was a bug.

Recall is Microsoft’s controversial AI-powered tool that takes snapshots of virtually everything a user does on their computer. The goal is to provide an easy way for users to find documents, messages, videos, text, and more, using natural language prompts. Unfortunately, Microsoft’s initial implementation was a security nightmare, prompting the company to delay its rollout.

First spotted by Deskmodder, the latest Windows update appeared to open the door to uninstalling Recall, listing it in the ‘Turn Windows features on or off’ section of the Control Panel. Unfortunately, Microsoft has no plans to allow users to uninstall Recall, saying the dialog was a bug.

“We are aware of an issue where Recall is incorrectly listed as an option under the ‘Turn Windows features on or off’ dialog in Control Panel,” Windows senior product manager Brandon LeBlanc told The Verge. “This will be fixed in an upcoming update.”

It’s a shame that Microsoft believes an option to uninstall Recall is something to “be fixed,” rather than giving users who value their privacy an opportunity to get rid of the feature.

]]>
607309
How AI is Transforming Enterprise Networking and Security https://www.webpronews.com/how-ai-is-transforming-enterprise-networking-and-security/ Mon, 02 Sep 2024 20:29:13 +0000 https://www.webpronews.com/?p=607289 Integrating Artificial Intelligence (AI) into enterprise networking and security is not just an option—it’s becoming necessary. The advent of AI, particularly in generative AI, has sparked what some experts call a “seminal transformational event” for enterprises worldwide. This transformation isn’t just about adopting new technologies; it’s about rethinking the foundations of how businesses operate, secure, and scale their networks.

The Evolution of AI in Enterprises: From Operations to Business Innovation

Shailesh Shukla, CEO of Aryaka, offers a comprehensive view of AI’s journey within the enterprise. He outlines a three-phase model of AI adoption that enterprises typically follow:

  1. Operational Efficiency: In the initial phase, enterprises leverage AI to automate repetitive tasks, driving operational efficiency across the board. AI tools streamline processes, reduce human error, and improve the organization’s overall productivity.
  2. Infrastructure Enhancement: The second phase involves integrating AI with large language models (LLMs) and linking them to domain-specific information within the company. This stage marks the rise of Retrieval-Augmented Generation (RAG), where AI not only accesses vast amounts of global information but also tailors it to meet the enterprise’s specific needs.
  3. Business Transformation: The real magic happens in the final phase—new business models emerge powered by AI. This phase is characterized by the development of innovative products and services that were previously unimaginable, thanks to the combined power of AI, RAG, and global infrastructure.

Shukla emphasizes that Aryaka, as a leader in networking and security, plays a critical role in each of these phases. “Networking and security have a very significant role to play in both the evolution and adoption of AI, as well as in using AI within the industry itself,” he explains.

AI and Enterprise Networking: A Symbiotic Relationship

One key area where AI is making a profound impact is enterprise networking. As organizations increasingly rely on AI-driven applications, a robust, scalable, and secure network infrastructure becomes paramount. Aryaka’s offerings, such as AI Perform, AI Secure, and AI Observe, are designed to address these challenges head-on.

Global Access and Performance: AI-driven applications require seamless global access, particularly those that rely on GPU-as-a-service or AI-as-a-service. Aryaka focuses on providing this access with high performance and resilience, ensuring that enterprise users can connect to AI services anywhere in the world without compromising on speed or security.

Security and Threat Detection: Security is another area where AI is making significant strides. Traditional security measures are no longer sufficient to combat the sophisticated threats posed by modern cybercriminals. AI enhances security by improving threat detection capabilities, enabling organizations to identify and neutralize threats before they can cause harm. As Shukla notes, “With the ability to look at a large number of variables and determine where there might be a threat, AI enables advanced threat hunting and detection.”

Knowledge Loss Prevention: Shukla introduced a novel concept, “knowledge loss prevention.” In an era where AI systems have access to vast amounts of enterprise data, protecting this knowledge from being leaked or misused is crucial. Aryaka is developing solutions to safeguard sensitive information, particularly against emerging threats like prompt injection attacks, which exploit vulnerabilities in AI systems to extract confidential data.

AI in Security: Beyond Traditional Measures

The traditional approach to security in enterprises has been reactive, relying heavily on predefined rules and manual monitoring. AI, however, is shifting this paradigm towards a more proactive and predictive model.

AI-Driven Access Control: One significant advancement in security is AI-driven access control. By integrating AI with legacy systems like Cloud Access Security Brokers (CASBs), Aryaka enhances user authentication and access management, ensuring that only authorized personnel can access sensitive AI applications. This approach not only strengthens security but also simplifies the management of user permissions across the enterprise.

Threat Hunting and Anomaly Detection: AI analyzes vast datasets in real time, making it an invaluable tool for threat hunting. By continuously monitoring network traffic and user behavior, AI can detect anomalies that might indicate a security breach. This capability allows enterprises to respond to threats more quickly and effectively.
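
A common starting point for this kind of anomaly detection is an unsupervised model such as an isolation forest trained on features from normal traffic. The sketch below is a generic illustration with fabricated flow features, not a description of Aryaka's product.

```python
# Unsupervised anomaly detection over network-flow features.
# Generic sketch; the flow data below is fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_s, distinct_ports]
normal_flows = np.random.default_rng(0).normal(
    loc=[5_000, 20_000, 3.0, 2], scale=[1_000, 4_000, 1.0, 1],
    size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

new_flows = np.array([
    [5_200, 19_500, 2.8, 2],    # looks routine
    [900_000, 1_200, 0.4, 60],  # exfiltration-like burst, many ports
])
labels = detector.predict(new_flows)  # 1 = normal, -1 = anomaly
for flow, label in zip(new_flows, labels):
    if label == -1:
        print("Anomalous flow flagged for threat hunting:", flow)
```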

The Future of AI in Enterprise Networking and Security

As AI evolves, its role in enterprise networking and security will grow more critical. Aryaka’s vision for the future includes expanding its AI offerings to cover all aspects of networking and security, from performance optimization to advanced threat detection and knowledge protection.

“Over time, just as we have unified networking and security into a comprehensive service with our SASE [Secure Access Service Edge] offerings, we aim to do the same for AI on a global scale,” Shukla explains. Aryaka’s AI Perform, AI Secure, and AI Observe are just beginning this journey, promising to deliver a new level of efficiency, security, and innovation to enterprises worldwide.

AI is not just another tool in the enterprise toolkit—it’s a transformative force reshaping how organizations approach networking and security. Companies that successfully integrate AI into their operations, infrastructure, and business models will be far better positioned to achieve positive security outcomes as they confront new security challenges.

]]>
607289
Big Data in Healthcare: 2024 and Beyond https://www.webpronews.com/big-data-in-healthcare-2024-and-beyond-top-5-concepts/ Fri, 30 Aug 2024 20:54:09 +0000 https://www.webpronews.com/?p=607118 In an era where technology is rapidly advancing, Big Data has become a cornerstone in the transformation of healthcare. The healthcare industry is leveraging vast amounts of data generated from various sources—electronic health records (EHRs), medical imaging, wearable devices, genetic data, and even social media—to revolutionize patient care, optimize operations, reduce costs, and forecast health trends. As we look towards 2024 and beyond, the applications of Big Data in healthcare are not just expanding; they are reshaping the very fabric of the industry.

1. Enhanced Patient Care and Outcomes

Big Data is playing a pivotal role in revolutionizing patient care by enabling more personalized, evidence-based treatment plans. The power of predictive analytics is at the forefront of this transformation. By analyzing historical data, machine learning algorithms can identify patterns and predict future health outcomes, allowing for the early detection of diseases such as cancer, diabetes, and cardiovascular conditions.

“Predictive analytics is a game-changer in healthcare. It allows us to intervene before a condition becomes critical, improving patient outcomes and reducing healthcare costs,” says Dr. Karen DeSalvo, Chief Health Officer at Google.
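
As a toy illustration of the idea, the sketch below trains a logistic-regression model on a handful of patient-level features to estimate the risk of a future diagnosis. The dataset and column names are hypothetical, and real clinical models face far stricter validation, bias, and privacy requirements.

```python
# Toy disease-risk model (illustrative only; not clinical advice).
# Assumes a de-identified CSV with the hypothetical columns below.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

records = pd.read_csv("patient_history.csv")  # hypothetical dataset
features = ["age", "bmi", "systolic_bp", "hba1c", "smoker"]
X, y = records[features], records["diagnosed_within_5y"]

model = LogisticRegression(max_iter=1000)
print("AUC:", cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean())

model.fit(X, y)
# Flag the highest-risk patients for early screening outreach.
records["risk"] = model.predict_proba(X)[:, 1]
print(records.nlargest(5, "risk")[["age", "risk"]])
```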

Moreover, personalized treatment plans are becoming increasingly precise thanks to Big Data. By analyzing a patient’s medical history, genetic information, and lifestyle data, healthcare providers can tailor treatments that are more effective and have fewer side effects. The use of wearable devices and IoT sensors also facilitates continuous remote monitoring, enabling healthcare providers to detect anomalies in real-time and intervene promptly.

“Wearables are no longer just fitness trackers; they are life-saving tools that provide real-time data, enabling us to monitor patients continuously and intervene when necessary,” notes Eric Topol, Director of the Scripps Research Translational Institute.

2. Accelerating Clinical Research and Drug Development

The process of drug discovery and development, traditionally a lengthy and costly endeavor, is being revolutionized by Big Data. Researchers now have access to vast datasets of biological, chemical, and clinical information, which can be analyzed using machine learning algorithms to identify potential drug candidates more quickly and accurately.

“Big Data is accelerating the drug discovery process by allowing us to sift through vast amounts of data to identify compounds with the highest potential for efficacy and safety,” says Dr. Elias Zerhouni, former Director of the National Institutes of Health (NIH).

Real-World Evidence (RWE) is also becoming an integral part of clinical trials. By analyzing data from EHRs, insurance claims, and patient registries, researchers can gain insights into how drugs perform in real-world settings. This helps in designing more effective clinical trials and ensures that new therapies are safe and effective.

Additionally, the concept of precision medicine is gaining traction. By integrating genomic data with other health information, Big Data analytics enables the development of targeted therapies that are customized to an individual’s genetic makeup. This not only increases the likelihood of treatment success but also minimizes the risk of adverse reactions.

3. Optimizing Hospital Operations and Resource Management

Big Data is transforming the operational aspects of healthcare by improving efficiency and reducing costs. Predictive maintenance of medical equipment, for example, ensures that critical devices are serviced before they fail, reducing downtime and enhancing patient care.

“Predictive maintenance, driven by Big Data, is crucial for ensuring that our medical equipment is always operational when patients need it most,” says Mike Alkire, CEO of Premier Inc., a healthcare improvement company.

Big Data also plays a crucial role in optimizing hospital staffing and scheduling. By analyzing historical data on patient admissions and seasonal trends, hospitals can better predict patient influx and adjust staffing levels accordingly. This not only reduces wait times but also enhances patient satisfaction.
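
A very simple version of that forecasting idea appears below: a seasonal baseline that predicts admissions from the recent average for the same weekday, then converts the forecast into a staffing estimate. The file, columns, and nurse-to-patient ratio are hypothetical assumptions; production systems would use richer models and covariates.

```python
# Seasonal-baseline forecast of daily hospital admissions.
# Illustrative sketch; "admissions.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("admissions.csv", parse_dates=["date"])
df["weekday"] = df["date"].dt.dayofweek  # Monday = 0

# Average admissions per weekday over the trailing eight weeks.
recent = df[df["date"] >= df["date"].max() - pd.Timedelta(weeks=8)]
baseline = recent.groupby("weekday")["admissions"].mean()

# Staff next Monday assuming roughly one nurse per five admissions.
expected_monday = baseline.loc[0]
print(f"Expected admissions: {expected_monday:.0f}, "
      f"nurses to schedule: {expected_monday / 5:.0f}")
```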

“Efficient staffing and resource management, powered by Big Data analytics, are key to delivering high-quality patient care while controlling costs,” states Susan DeVore, former CEO of Premier Inc.

Supply chain management in hospitals is another area where Big Data is making a significant impact. By forecasting the demand for medical supplies, Big Data helps in maintaining optimal inventory levels, reducing waste, and ensuring that essential items are always available.

4. Enabling Preventive Care and Population Health Management

One of the most transformative applications of Big Data in healthcare is in the realm of preventive care and population health management. By analyzing demographic, socioeconomic, and health-related data, healthcare providers can identify high-risk populations and implement targeted interventions to prevent diseases before they occur.

“Preventive care is the future of healthcare, and Big Data is the key to identifying at-risk populations and intervening early,” says Dr. Robert Wachter, Chair of the Department of Medicine at the University of California, San Francisco.

Big Data analytics is also instrumental in epidemiological surveillance and disease outbreak prediction. By monitoring data from various sources, including social media and health records, healthcare organizations can predict and contain disease outbreaks before they escalate.

“Big Data allows us to monitor and respond to potential outbreaks in real-time, preventing widespread disease and saving lives,” notes Dr. Anthony Fauci, former Director of the National Institute of Allergy and Infectious Diseases (NIAID).

Chronic disease management is another area where Big Data is making strides. By analyzing patient data, healthcare providers can design personalized management programs that help patients better manage their conditions and avoid hospital readmissions.

5. Enhancing Fraud Detection and Data Security

With the increasing digitization of healthcare, the industry is becoming a prime target for fraud and cyberattacks. Big Data analytics is crucial in detecting fraudulent activities by analyzing patterns in claims data and flagging suspicious transactions for further investigation.
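
That kind of pattern analysis can start with something as simple as statistical outlier detection. The sketch below flags providers whose average billed amount sits far above their peers; the file and columns are hypothetical, and real fraud programs combine many such signals with human investigation.

```python
# Flag statistically unusual billing patterns in claims data.
# Illustrative sketch; "claims.csv" and its columns are hypothetical.
import pandas as pd

claims = pd.read_csv("claims.csv")  # columns: provider_id, amount

per_provider = claims.groupby("provider_id")["amount"].mean()
z_scores = (per_provider - per_provider.mean()) / per_provider.std()

# Providers billing more than three standard deviations above the norm
suspicious = z_scores[z_scores > 3].sort_values(ascending=False)
print("Flag for investigation:\n", suspicious)
```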

“Healthcare fraud is a significant issue, and Big Data analytics is our best defense against it,” says Louis Saccoccio, CEO of the National Health Care Anti-Fraud Association (NHCAA).

Data security is another critical concern. With sensitive patient information at stake, healthcare organizations must implement advanced security measures to protect against breaches. Big Data analytics enables real-time monitoring of healthcare systems, allowing organizations to detect and respond to threats quickly.

“Protecting patient data is paramount, and Big Data provides the tools we need to safeguard our systems from cyber threats,” states Theresa Payton, former White House Chief Information Officer and cybersecurity expert.

The Future of Big Data in Healthcare

As we look towards 2024 and beyond, the role of Big Data in healthcare will only continue to grow. By enabling more personalized patient care, accelerating clinical research, optimizing hospital operations, and enhancing preventive care and data security, Big Data is poised to transform the healthcare industry. However, this transformation comes with challenges, including data privacy, integration, and the ethical use of AI in healthcare.

The potential of Big Data in healthcare is immense, but realizing this potential requires a concerted effort from all stakeholders in the industry. As technology continues to evolve, the healthcare industry must embrace Big Data and the opportunities it presents to improve patient care and operational efficiency.

“The future of healthcare is data-driven, and those who harness the power of Big Data will lead the way in transforming the industry,” concludes Dr. John Halamka, President of the Mayo Clinic Platform.

In this new era, Big Data is not just a tool—it’s a catalyst for a healthier, more efficient, and more equitable healthcare system.

]]>
607118
LinkedIn Dumps CentOS In Favor of Azure Linux https://www.webpronews.com/linkedin-dumps-centos-in-favor-of-azure-linux/ Thu, 29 Aug 2024 17:26:04 +0000 https://www.webpronews.com/?p=607038 In a blow to Red Hat, LinkedIn has made the decision to migrate its servers, VMs, and containers from CentOS Linux to Azure Linux.

CentOS was a popular community Linux distro based on Red Hat Enterprise Linux (RHEL), maintaining full compatibility with its parent distro. Red Hat eventually took over the project and later discontinued the traditional distro in favor of CentOS Stream, ending support for the most recent release, CentOS 8, much sooner than anyone expected. The older, but more popular, CentOS 7 was slated to go end-of-life (EOL) on June 30, 2024.
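
For organizations facing that deadline, the first step is simply knowing which hosts still run the EOL distro. The sketch below parses the standard /etc/os-release file on one host; in practice you would run a check like this fleet-wide through your own SSH or configuration-management tooling.

```python
# Identify whether a host is still on an end-of-life CentOS release.
# Illustrative sketch; run it fleet-wide via your own SSH/CM tooling.
from pathlib import Path

def os_release(path: str = "/etc/os-release") -> dict:
    """Parse the standard os-release file into a dict of key/value pairs."""
    fields = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

info = os_release()
if info.get("ID") == "centos" and info.get("VERSION_ID") in {"7", "8"}:
    print(f"EOL distro detected: {info.get('PRETTY_NAME')} -- plan migration")
else:
    print(f"OK: {info.get('PRETTY_NAME')}")
```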

Like many companies that relied on CentOS, LinkedIn had to decide on a migration path to move its various systems to a supported Linux distro. Given how much ill will Red Hat created within the Linux community, and among organizations that relied on CentOS, it’s not surprising that LinkedIn looked for a non-Red Hat solution. And since LinkedIn is owned by Microsoft, it’s even less surprising the company opted to go with Microsoft’s Azure Linux distro.

Nonetheless, as LinkedIn’s Ievgen Priadka, Sweekar Pinto, and Bubby Rayber point out in a blog post, moving to Azure Linux helped the company meet two critical goals:

The move to Azure Linux supported two critical goals: providing a modern, secure operating system to reliably serve over 1 billion LinkedIn members worldwide; and delivering innovative new AI-powered features to members faster. Beyond these goals, other critical factors in our decision were cost-effectiveness, customization, scalability, community support, and compliance.

The team then goes on to outline the lengthy process undertaken to ensure a smooth transition, including planning, pilot programs, infrastructure preparation, onboarding teams, data migration, and more. Almost immediately, LinkedIn began to notice improved deployment speed, as well as other benefits, from the move to Azure Linux.

Azure Linux offered our teams a sense of familiarity mixed with novelty. Our core team delivered a series of prototype hosts, which came with a pre-set operating system, to our pilot teams. These hosts helped the teams get accustomed to the new OS, experiment with it, and enjoy the experience of discovering a modern operating system.

The core team also extended personalized, in-depth assistance to help internal partner teams develop compatible software packages and set up operating system components according to the unique needs of different applications. To prepare engineers for the transition to Azure Linux OS, we shared insights from the pilot programs during technical talks, team meetings and casual office conversations.

The transition significantly improved our deployment speed and system reliability, directly enhancing our ability to innovate and respond to market demands. The seamless integration with familiar tools boosted productivity, while extensive support from Azure Linux support team helped us minimize downtime. As a result, we’ve strengthened trust and confidence in our engineering capabilities across our organization, which helps us make the case for future technological advancements and gives us a competitive edge in our operations.

The company touts the “community-driven innovation” along with its relationship with Microsoft as keys to pulling off a successful migration.

The migration of LinkedIn’s fleet to Azure Linux was a strategic decision that entailed numerous considerations and challenges. Its successful execution yielded substantial benefits ranging from cost savings to enhanced security and flexibility. We achieved both critical goals: provide a modern, secure operating system to reliably serve LinkedIn members worldwide; and deliver innovative new AI-powered features to members faster.

By embracing open-source solutions, LinkedIn, in partnership with Microsoft, harnessed the power of community-driven innovation and unlocked new levels of efficiency, agility, and competitiveness. Nevertheless, careful planning, comprehensive training and ongoing support were essential to making the transition smooth and maximizing the long-term value of the migration.

LinkedIn’s full blog post is very detailed, well worth a read, and offers valuable insights for other companies planning a similar OS migration.

]]>
607038
83% Public Cloud Repatriation Stat is Misleading https://www.webpronews.com/83-public-cloud-repatriation-stat-is-misleading/ Thu, 29 Aug 2024 12:26:31 +0000 https://www.webpronews.com/?p=607019 A recent statistic has been making waves in the tech world, suggesting that 83% of enterprises are repatriating their workloads from public clouds back to on-premises or private cloud infrastructures. This figure, highlighted in a survey by banking giant Barclays, has sparked significant discussion across the industry. However, while the number is indeed real, many experts argue that it is highly misleading and oversimplifies a much more complex situation.

The Origins of the 83% Figure

The 83% figure has gained traction, being cited by major industry players and analysts as a sign of a potential shift in the cloud computing landscape. Broadcom’s leadership prominently displayed this statistic at the recent VMware Explore event in Las Vegas, using it as evidence to promote their virtualization software to customers frustrated with the costs and complexities of public cloud services. Even Michael Dell, CEO of Dell Technologies, weighed in on the conversation, tweeting that the statistic was “not surprising.”

But what does this number really represent? According to the Barclays survey, it indicates that 83% of enterprises are moving at least one workload from the public cloud back to their on-premises or private cloud infrastructure. At first glance, this seems to suggest a mass exodus from public clouds like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. But as with many statistics, the reality is far more nuanced.

The Misleading Nature of the Statistic

“The 83% figure is both accurate and misleading,” says Larry Walsh, CEO and Chief Analyst of Channelnomics. “It’s true in the sense that a significant number of enterprises are exploring or initiating some form of repatriation. However, it doesn’t capture the full picture of what’s really happening in the cloud space.”

One of the key issues with this statistic is that it represents companies, not workloads. “If 83 out of 100 companies each move a single workload from the cloud to on-premises, that’s still 83%,” Walsh explains. “But it doesn’t mean that 83% of all workloads are being repatriated. Many of these companies are likely moving small, non-critical workloads while keeping the vast majority of their operations in the cloud.”

Breaking Down the Numbers

To understand why the 83% figure is misleading, it’s important to delve deeper into what’s actually happening. “The statistic doesn’t distinguish between the types of workloads being repatriated,” says Andre Marsiglia, a constitutional lawyer and freedom of expression expert who also has a background in technology law. “Are these mission-critical applications or smaller, less significant workloads? The distinction is crucial.”

Moreover, the repatriation process is not a straightforward return to on-premises infrastructure. Many enterprises are opting for a hybrid cloud approach, where some workloads are moved to private clouds or co-located data centers, while others remain in public clouds. This hybrid strategy allows companies to maintain flexibility, optimize costs, and manage their data more effectively.

“It’s not a binary choice between public cloud and on-premises,” says Marsiglia. “Enterprises are looking for the best of both worlds. They want the scalability and convenience of the public cloud for certain workloads, and the control and cost predictability of on-premises or private clouds for others.”

The Role of AI and Cost Management

Another factor driving the conversation around repatriation is the rise of artificial intelligence (AI) and the associated costs of running AI workloads in public clouds. “AI is incredibly processing-intensive,” says Walsh. “While public clouds can handle these workloads, the costs can be prohibitive for some enterprises, leading them to consider moving these workloads back to on-premises infrastructure where they can manage the resources more effectively.”

This focus on AI has led some analysts, including those at IDC, to predict an increase in hardware infrastructure sales driven by AI deployments. “The belief is that enterprises will build their AI systems locally rather than rely on hyperscalers, which would drive growth in on-premises infrastructure,” says Walsh. “But again, this doesn’t mean that all workloads are leaving the cloud—just that certain high-cost, high-performance workloads might be.”

The Bigger Picture

While the 83% statistic has been widely circulated, it’s important to view it in the context of broader trends in cloud computing. Public cloud spending continues to grow each quarter, indicating that despite some repatriation, the overall reliance on hyperscalers remains strong.

“Cloud repatriation is a real trend, but it’s not the mass exodus that some are making it out to be,” says Michael Dell. “The reality is that enterprises are becoming more sophisticated in how they use cloud resources. They’re optimizing their workloads across different environments to get the best performance and cost outcomes.”

In the end, the 83% figure should be seen as an indicator of a broader conversation happening in the industry. “It’s about exploring options, not abandoning the cloud,” says Marsiglia. “Enterprises are asking themselves where they can get the most value, and for some workloads, that might mean moving out of the public cloud. But for many others, the cloud remains the best option.”

Enterprises Becoming More Strategic

The 83% public cloud repatriation statistic, while eye-catching, is highly misleading when taken at face value. It represents a complex and nuanced trend where enterprises are not necessarily fleeing the public cloud but are instead becoming more strategic in their use of cloud resources. As the cloud landscape continues to evolve, it’s crucial to look beyond the headlines and understand the full context of the data. Only then can businesses make informed decisions about their infrastructure strategies.

]]>
607019
Not So Fast: Microsoft May Not Be Killing The Control Panel https://www.webpronews.com/microsoft-may-not-be-killing-the-control-panel/ Tue, 27 Aug 2024 01:53:19 +0000 https://www.webpronews.com/?p=606927 Following evidence that Microsoft was killing off the venerable Control Panel, it appears the company may be having a change of heart.

The Control Panel has been a feature of Windows since its earliest days, providing a central place for users to configure their system, set up peripherals, and administer their computer. In recent years, there has been overlap with the new Settings app, and Microsoft updated a support document several days ago to indicate the Control Panel’s days were coming to an end.

The Control Panel is a feature that’s been part of Windows for a long time. It provides a centralized location to view and manipulate system settings and controls. Through a series of applets, you can adjust various options ranging from system time and date to hardware settings, network configurations, and more. The Control Panel is in the process of being deprecated in favor of the Settings app, which offers a more modern and streamlined experience.

Tip: while the Control Panel still exists for compatibility reasons and to provide access to some settings that have not yet migrated, you’re encouraged to use the Settings app, whenever possible.

Microsoft has since updated the support article once more, making a significant change to the wording about Control Panel’s future.

The Control Panel is a feature that’s been part of Windows for a long time. It provides a centralized location to view and manipulate system settings and controls. Through a series of applets, you can adjust various options ranging from system time and date to hardware settings, network configurations, and more. Many of the settings in Control Panel are in the process of being migrated to the Settings app, which offers a more modern and streamlined experience.

Tip: while the Control Panel still exists for compatibility reasons and to provide access to some settings that have not yet migrated, you’re encouraged to use the Settings app, whenever possible.

The new verbiage leaves Control Panel’s future far more open than the original wording did. At this point, it’s unclear whether the Control Panel will still be deprecated at some point in the future, or whether the Settings app and Control Panel will coexist. If they do, the Settings app could serve as a streamlined, easy-to-use configuration tool, with the Control Panel providing more in-depth options for advanced users.

Ultimately, only time will tell if Control Panel continues to be a core part of Windows.

]]>
606927