Viewpoints – Digital IT News (https://digitalitnews.com): IT news, trends and viewpoints for a digital world

Best Places to Apply Digital Innovation Across Your Enterprise
https://digitalitnews.com/best-places-to-apply-digital-innovation-across-your-enterprise/ (Mon, 16 Sep 2024)

Digital innovation is no longer a luxury—it’s a necessity. Enterprises that embrace digital transformation are better equipped to stay competitive, streamline operations, and meet the changing demands of customers. But where should you focus your efforts to ensure the greatest impact? Here are six areas to apply digital innovation in your enterprise, leveraging the power of application modernization and strategic software development.
1. Modernizing Legacy Applications

One of the most impactful areas for digital innovation is modernizing your legacy applications. These systems, often the backbone of your operations, can become bottlenecks as technology evolves. Legacy applications might limit scalability, pose security risks, and reduce overall efficiency.

Research from Accenture shows that 80% of IT leaders believe modernizing legacy systems is critical to sustaining innovation in the long term. By modernizing these apps, you can enhance their functionality, improve performance, and ensure they are secure and scalable for future needs. This modernization might involve migrating to the cloud, re-architecting for better performance, or integrating new features that align with current business objectives. The benefits are clear: enhanced functionality and performance, improved security and compliance, and increased scalability and adaptability.

2. Custom Application Development for Agility

Agility is key in today’s business environment, and custom application development allows you to create software solutions tailored specifically to your business needs. Unlike off-the-shelf software, custom applications can be designed with your unique processes and workflows in mind, offering a competitive edge.

According to a report by Gartner, organizations that focus on custom software development are 60% more likely to outperform their peers in terms of innovation and market responsiveness. Whether it’s developing multi-platform applications or designing user interfaces that engage and inspire, custom development empowers your business to adapt quickly to market changes and customer demands. This kind of agility enables companies to rapidly pivot in response to industry shifts, ensuring they remain competitive and relevant.

3. Automation and Workflow Optimization

Automation is another crucial area for digital innovation. Automating time-consuming processes can significantly boost productivity, reduce errors, and free up your team to focus on strategic initiatives.

A McKinsey study estimates that automation can increase productivity by up to 20% in some industries. Workflow management tools, coupled with automation, can streamline operations, ensure consistency, and improve collaboration across departments. By automating repetitive tasks and optimizing workflows, businesses can achieve operational efficiency and drive growth. The long-term impact of these changes includes reduced operational costs and improved employee satisfaction, as teams can focus on more meaningful work rather than repetitive tasks.
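As a concrete, if simplified, illustration of the kind of repetitive work that makes a good automation candidate, the sketch below consolidates a folder of daily CSV reports into a single summary file. The folder layout is a hypothetical example, and the script assumes all reports share the same columns.

```python
# Minimal automation sketch: merge daily CSV reports into one summary file.
# The "reports/" folder is a hypothetical placeholder, and all input files are
# assumed to share the same column headers.
import csv
import glob

def consolidate_reports(pattern="reports/*.csv", out_path="summary.csv"):
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path  # keep provenance for auditing
                rows.append(row)
    if not rows:
        return 0
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    print(f"Consolidated {consolidate_reports()} rows")
```

A script like this replaces a recurring manual copy-and-paste task with a single scheduled job, which is the basic pattern behind most workflow automation wins.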

4. Enhancing Customer Experience Through Digital Channels

Customer experience is a critical differentiator in today’s market, and digital innovation allows businesses to create more engaging, personalized, and responsive customer interactions. According to PwC, 73% of customers say that experience is a key factor in their purchasing decisions, even more important than price or product quality.

From improving your software supply chains to developing user-friendly applications, every touchpoint can be optimized to enhance the customer journey. Implementing a human-centered approach to software design ensures that your applications not only meet functional needs but also resonate with users on a deeper level. This focus on customer experience can lead to higher customer retention rates, increased loyalty, and ultimately, greater revenue.

5. Secure and Agile Software Engineering

In an era where cybersecurity threats are ever-present, secure software engineering practices are non-negotiable. Digital innovation in this area includes adopting DevOps practices that integrate security into every phase of the development process.

According to a study by the Ponemon Institute, organizations that integrate security into DevOps can detect and respond to security incidents 50% faster than those that do not. This approach ensures that your applications are not only built for performance but also for resilience against security threats. Furthermore, adopting modern data management practices can help maintain data integrity and ensure compliance with regulatory requirements, reducing the risk of costly breaches and compliance failures.

6. Continuous Support and Maintenance

Finally, ensuring that your applications remain secure, responsive, and up-to-date is critical. Continuous support and maintenance services provide the monitoring and minor enhancements needed to keep your applications running smoothly.

Proactive application management can reduce downtime by up to 30%, according to research by IDC. This proactive approach helps identify potential issues before they become critical problems, ensuring uninterrupted business operations and maintaining customer trust. With continuous optimization and improvement, your enterprise can remain agile and responsive in a rapidly changing technological landscape.

Partnering for Strategic Evolution and Long-Term Success

Digital innovation is a journey, not a destination. By strategically applying digital innovation across these key areas—modernizing legacy applications, custom development, automation, enhancing customer experience, secure engineering, and continuous maintenance—you can drive significant business transformation. To make the most of your digital innovation strategies, choose an expert partner as your guide. A proven provider that offers comprehensive engineering and innovation services can help you navigate your strategic evolution, ensuring you stay ahead of the competition and achieve long-term success. Embrace digital innovation today and power your business into the future.

Learn how Forty8Fifty Labs can drive digital innovation into your enterprise here.

Related News:

Security Concerns Shaping the Way Organizations Approach DevOps

Halcyon and Verinext Partner to Close Endpoint Security Gaps

Beware FishXProxy, the Ultimate Phishing Kit
https://digitalitnews.com/beware-fishxproxy-the-ultimate-phishing-kit/ (Mon, 09 Sep 2024)

SlashNext Email Security researchers have recently uncovered FishXProxy, a new phishing kit being billed on the dark web as “The Ultimate Powerful Phishing Toolkit.” Phishing kits are worrying because they offer an end-to-end phishing solution which significantly lowers the barrier to entry for would-be cybercriminals.

The developers of FishXProxy claim their toolkit was created for educational use, but it clearly appears designed for malicious purposes. The product banner promotes FishXProxy as the “#1 Most Powerful Reverse Proxy for Phishing” with support for multiple platforms including “Gmail, QuickBooks, Office, Outlook, Yahoo, Dropbox, OneDrive… and more!”

The FishXProxy kit can overcome many technical barriers traditionally associated with phishing campaigns with clever tactics that make it easier for cybercriminals to slip through security defenses undetected. These campaigns are usually launched through uniquely generated web links or dynamic attachments to avoid initial detection. FishXProxy then further eludes security protections with advanced features such as antibot configurations, Cloudflare Turnstile integrations, page expiration settings, and more.

FishXProxy has been promoted on XSS, Breach, and Telegram, as well as in private communities that are invite-only. The kit enables attackers to quickly create realistic phishing pages that mimic a wide range of services including popular email providers, financial institutions, and other platforms that require specific user credentials. Such adaptability enables attackers to effectively target multiple platforms and achieve higher success rates.

Perhaps most concerning for security teams, FishXProxy is designed to be extremely user-friendly. The toolkit requires minimal technical skill because it simplifies every step needed to conduct sophisticated phishing attacks. The kit’s automated installation process, straightforward interface, and comprehensive documentation make FishXProxy an ideal tool for inexperienced hackers who lack coding experience.

Camouflage and Subterfuge Help Mask Social Engineering Attacks

The goal of FishXProxy is to convince users to give up their credentials, and this goal is achieved through a multilayered antibot system. By building in deep layers of code, FishXProxy makes it hard for automated scanners and human researchers to recognize the underlying phishing motives behind the sites created by the kit.

Attackers can mask their intentions through bad links, malicious attachments, and even Cloudflare CAPTCHA antibot systems. And because the kit integrates with Cloudflare, it presents unaware users with enterprise-grade infrastructure that appears to support legitimate web operations.

In addition, FishXProxy uses a cookie-based tracking system that enables attackers to follow users across different phishing channels. In turn, such micro-targeting of individuals leads to more convincing campaigns and more persistent attacks. By tracking users across diverse campaigns, attackers have adopted FishXProxy to conduct more prolonged, multi-stage operations. Consistent cookie-naming rules across different phishing sites allow attackers to develop powerful profiles that identify repeat visitors, and then tailor future phishing content based on previous likes and interactions.

Protecting Against “The Ultimate Powerful Phishing Toolkit”

SlashNext Email Security researchers have already seen the techniques associated with this phishing kit in operation on a regular basis. Users should be aware of several signs of phishing attempts such as unusual URLs, unexpected CAPTCHA challenges, a sense of urgency or pressure in the messaging, inconsistencies in design or grammar, or unexpected email attachments, especially those containing HTML files.

Everyday users can help defend themselves from such phishing attacks by adopting multi-factor authentication (MFA), making regular updates to software and operating systems, and engaging in security awareness training. Other steps include employing email filtering, using secure browsers with phishing protections, and utilizing password managers to ensure that users only enter credentials on legitimate sites.
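To make a couple of those warning signs concrete, here is a minimal, illustrative sketch (not a substitute for a real email security product) that parses a saved .eml message with Python's standard library and flags two of the red flags mentioned above: HTML attachments and links whose visible text points to a different domain than their actual destination.

```python
# Illustrative phishing red-flag checks on a saved .eml file (stdlib only).
# This is a simplified sketch, not a replacement for dedicated email security tooling.
import re
import sys
from email import policy
from email.parser import BytesParser

def check_message(path):
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    warnings = []

    # Red flag 1: HTML file attachments.
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if name.endswith((".html", ".htm")):
            warnings.append(f"HTML attachment: {name}")

    # Red flag 2: link text that shows one domain but points to another.
    body = msg.get_body(preferencelist=("html", "plain"))
    html = body.get_content() if body else ""
    for href, text in re.findall(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', html, re.S | re.I):
        shown = re.search(r"https?://([^/\s<]+)", text)
        actual = re.search(r"https?://([^/\s\"]+)", href)
        if shown and actual and shown.group(1).lower() != actual.group(1).lower():
            warnings.append(f"Link text shows {shown.group(1)} but points to {actual.group(1)}")

    return warnings

if __name__ == "__main__":
    for w in check_message(sys.argv[1]):
        print("WARNING:", w)
```

Checks like these only catch a narrow slice of the tricks a kit such as FishXProxy uses, which is why the layered defenses described above still matter.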

Of course, it remains critical to provide regular security training for employees to recognize the latest phishing threats, and maintain strong authentication protections to guard against credential thefts. Yet to stand up truly resilient security safeguards against such sophisticated multi-layered attacks, organizations will also need to deploy their own multi-layered solutions. In this environment, the only viable security option is to build in real-time threat detection across all channels spanning email, web, mobile, messaging, and collaboration apps.

To learn how SlashNext can help prevent FishXProxy cybercriminals within your organization, visit the website here.

Related News:

SlashNext’s Project Phantom Launched to Thwart Obfuscation Techniques

Executive Protection Service Launched by SlashNext

Security Concerns Shaping the Way Organizations Approach DevOps
https://digitalitnews.com/security-concerns-shaping-the-way-organizations-approach-devops/ (Fri, 06 Sep 2024)

Security is a major concern for software development and IT operations. Staying on top of how security shapes the DevOps landscape is crucial to business decisions. 

Discover what experts have to say about the security concerns that DevOps is currently facing.

Cloud Tech Adoption

As enterprises increasingly adopt cloud technologies, with Gartner predicting that over 50% will be using the cloud by 2028, security can no longer be an afterthought. Instead, it must be seamlessly embedded into the Software Development Life Cycle (SDLC), commonly referred to as DevSecOps. This integration is so crucial that the Open Worldwide Application Security Project (OWASP) Foundation has developed maturity models to guide organizations at various stages of DevSecOps implementation.

As DevSecOps gains traction, organizations will adopt a shift-left approach, introducing security measures early in the development process. This includes integrating tools like Static Application Security Testing (SAST), open-source vulnerability scanners, and credential scanners into the build pipeline, as well as conducting threat modeling before development begins. Once deployed to production, automated tests to validate security features, along with scanning container images for vulnerabilities, will become integral to developing secure products. – Siri Varma Vegiraju, Tech Lead at Microsoft.
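As a rough sketch of what wiring such scanners into a build step can look like, the Python script below shells out to two widely used open-source tools, Bandit (SAST for Python code) and pip-audit (open-source dependency vulnerability scanning), and fails the build if either exits with findings. Tool choice, paths, and gating policy are illustrative assumptions, not a prescription for any particular pipeline.

```python
# Illustrative build-pipeline gate: run a SAST scan and a dependency audit,
# failing the build when either tool exits non-zero (i.e., reports findings).
# Assumes `bandit` and `pip-audit` are installed; the "src/" path is an example.
import subprocess
import sys

CHECKS = [
    ("SAST (Bandit)", ["bandit", "-r", "src/", "-q"]),       # static code analysis
    ("Dependency audit (pip-audit)", ["pip-audit"]),          # known-vulnerable packages
]

def main() -> int:
    failed = False
    for name, cmd in CHECKS:
        print(f"Running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} reported findings (exit code {result.returncode})")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

In a real pipeline this step would typically run on every pull request, alongside credential scanning and the post-deployment container image scans mentioned above.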

The Open-Source Elephant in The Room

For a long time, developers and security teams agreed that ‘shifting left’ was the best way to prevent software supply chain compromises. Shifting left meant security evaluations were conducted earlier in the development process — often before any code is actually written.

The problem is that developers are not writing as much of their own code anymore. Software now consists of up to 90% open-source and third-party components. As a result, many developers cannot answer the question, ‘What’s in your software?’ This leaves security teams unknowingly dealing with potentially faulty software that doesn’t come to light until a breach occurs.

The open-source elephant in the room has led to security concerns that are shaping the way organizations approach DevOps. Today, more organizations are adopting a paradigm shift in how they approach security in the development process to combat software supply chain attacks, called ‘shifting left of shift left.’ While shift left primarily focuses on early testing and quality assurance, shifting left of shift left extends this concept further by incorporating enhanced collaboration, automation and continuous improvement throughout the entire software development lifecycle. Specific steps to do so include:

  • Understanding Risks Beyond Vulnerabilities – The first step is ensuring that developers and security professionals understand the risks that lie hidden within the software and recognize that vulnerabilities are only one dimension of risk. Inherent risks deep in the software supply chain can have serious consequences. Having the tools to identify inherent risks is critical.
  • Select Foundational Tools – Shifting left of shift left begins with choosing the right foundational tools to assess open-source software components. Approximately 95% of open-source vulnerabilities are found in open-source code packages that are not selected by software developers and are indirectly pulled into projects.
  • Prioritize Security in Development Tools – I encourage developers to opt for secure programming languages, frameworks, and libraries to ensure that security is integrated from the ground up.
  • Implement Real-Time Solutions – To shift left of shift left, developers need more than just a testing mechanism; they need a real-time security solution consistently assessing code.
  • Developer Training – Helping developers understand pain points, signs of issues, and the implications of their decisions on the overall security posture can alleviate tension with security team members and create secure code from the start.
  • Continuous Security Assessments – Security doesn’t end when the software goes live. Following development, organizations should have tools in place to conduct ongoing evaluations of code to help in the timely identification and remediation of vulnerabilities. – Nick Mistry, SVP, CISO at Lineaje.

Security is now at the forefront of DevOps, leading to the rise of DevSecOps, where security is integrated throughout the development lifecycle rather than being treated as an afterthought. Organizations are embedding security practices into their CI/CD pipelines, automating vulnerability scanning, and ensuring compliance checks are part of every stage of development.

This shift is changing the way DevOps teams operate. Developers are being trained in secure coding practices, and security teams are collaborating more closely with DevOps engineers to create secure, automated environments. The focus is now on proactive security—identifying and addressing potential threats early, before they become critical issues. As a result, DevOps has become more security-focused, with an emphasis on continuous monitoring, automated testing, and real-time threat detection.

Ultimately, security is no longer a separate function; it’s a fundamental component of DevOps, driving new processes, tools, and team structures. – Maksym Lushpenko, Founder & CEO at Brokee

Increased Security Breaches & Automated Security Testing

In an increasingly interconnected and digital world, it is no surprise that there has been a steady rise in the number and cost of security breaches over the last few years. As such, addressing security concerns is a top priority for any company, with the issue leading to a paradigm shift in the way organizations approach DevOps. Forward-looking companies are embracing DevSecOps approaches. These favor more holistic “Security by Design” practices that can enhance cyber resilience while removing conventional silos between DevOps and cybersecurity experts. In effect, DevSecOps integrates security as a shared responsibility throughout the entire DevOps process, starting from the early development stages, rather than relying on conventional security testing at the end of the DevOps lifecycle. – Andrew Pielage, Senior Software Engineer at Payara Services

One of the key enablers of this transition is certainly automation, already a pillar of DevOps. It supports automated security testing in the software development pipeline, flagging anomalies and untested code as a high-priority risk. As a result, developers can benefit from a continuous monitoring and improvement tool to identify and fix vulnerabilities earlier and deliver more secure software faster. – Abdul Rahim, Release Automation Engineer at Payara Services. 

Ultimately, thanks to DevSecOps, companies can shift from purely reactive security strategies, in which threats and other issues are resolved only after they occur, to more proactive approaches that resolve vulnerabilities before they are exploited. This makes applications, the companies developing them, and end users more robust and resilient.

The use of DevSecOps practices at Payara is playing a key role in helping the entire engineering team deliver high-quality code during rapid development cycles. Through a quality-centric, collaborative environment that leverages automation, the company successfully releases monthly software updates for its multiple platform versions to its enterprise customers. – James Hillyard, Infrastructure Engineer for IT Operations and DevOps at Payara Services

Complexity

Organizations must factor in compliance across numerous regulations and internal policies while at the same time anticipating new cyberattack techniques and challenges. Teams should work closely with compliance officers and security teams to ensure their applications meet their expectations before release.

Complexity has created a greater need for automation, but it’s also made building automation more difficult, especially if it’s an afterthought. There are now so many activities tied to DevOps automation. For example, there’s test automation, build automation and security automation. All these categories must be addressed when working to tame complexity. – Prashanth Nanjundappa, VP of Product Management at Progress

Securing Identities Across Different Systems

Securing identities across different systems has become a top priority for organizations, especially as credential stuffing attacks rise and leaked passwords flood the dark web.

As DevOps teams manage increasingly complex environments, it’s become critical to prioritize authentication methods like passkeys and multi-factor authentication (MFA) to prevent unauthorized access. This shift is driving the adoption of advanced security solutions that both protect the development pipeline and ensure resilient identity management against modern threats. – Rishi Bhargava, co-founder at Descope


Summer Security Trends: Influencing Technologies
https://digitalitnews.com/summer-security-trends-influencing-technologies/ (Fri, 30 Aug 2024)

Technology plays a determining role in cybersecurity’s effectiveness and the threats it must protect against. For individuals, organizations and governments to prepare for potential threats, they need to stay up-to-date on the influencing technologies in play. 

Below, security professionals have shared which technologies have influenced summer security trends and how.

Generative AI 

The ongoing proliferation of generative AI technologies is deeply influencing cybersecurity technologies. Existing security products on the market are proving to be highly vulnerable to deepfakes, which are being used to trick unprepared identity verification systems and fool unsuspecting employees. We’ve seen an AI arms race for detecting deepfakes, yet cyberattacks only escalate, with bad actors social engineering employees using voice, video and image deepfakes. Following a winter and spring of crippling deepfake attacks, businesses are looking to adopt stronger AI-powered cyber defenses by implementing identity verification solutions that focus not on passive detection, but on active prevention of digital injection attacks and the use of AI deepfakes. – Aaron Painter, CEO at Nametag

Balancing Cybersecurity Strategy with Risk Tolerance

Relying on just one security component barely leads to actual protection. A museum can have the most advanced surveillance system in the world — but without physical measures in place, security personnel can only observe a theft, not prevent it. Along the same lines, even if businesses invest in monitoring tools, they won’t be able to actually respond to threats without an effective incident response plan and the right team to execute it.

A balanced cybersecurity strategy supports tools with people and processes, which play a crucial role in protecting infrastructure without much financial investment. For example, establishing a process that requires business users to annually review their data repository permissions can minimize your attack surface by eliminating superfluous permissions. This process-based approach that emphasizes least privilege security can be particularly helpful for SMBs, as it provides a solid foundation that can be scaled up as the business expands. – Illia Sotnikov, Security Strategist & Vice President of User Experience at Netwrix
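A hedged sketch of what such a periodic permission review can look like in practice: given a (hypothetical) inventory of repository permission grants with last-used timestamps, flag grants that have gone unused for a year so owners can confirm or revoke them. The data shape below is invented for illustration; a real review would pull from your identity or data-governance tooling.

```python
# Illustrative least-privilege review: flag permission grants unused for a year.
# The inventory records below are hypothetical; real data would come from your
# identity provider or data-governance platform.
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=365)

permissions = [
    {"user": "alice", "repository": "finance-reports", "role": "editor",
     "last_used": datetime(2024, 6, 1)},
    {"user": "bob", "repository": "finance-reports", "role": "viewer",
     "last_used": datetime(2023, 2, 10)},
]

def stale_grants(inventory, now=None):
    now = now or datetime.utcnow()
    return [p for p in inventory if now - p["last_used"] > REVIEW_WINDOW]

for grant in stale_grants(permissions):
    print(f"Review: {grant['user']} still has {grant['role']} on "
          f"{grant['repository']} (last used {grant['last_used']:%Y-%m-%d})")
```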

Zero-Trust, EDR and IAM  

Typically, summer months lead to an increase in cybersecurity risks and threats due to employees traveling on vacation and having more relaxed “work from home” policies. It’s crucial that organizations remain on heightened alert when it comes to gaining visibility into employee usage of and access to corporate devices to pinpoint unusual behavior. Technologies such as Identity and Access Management (IAM), Endpoint Detection and Response (EDR), and Zero-Trust Architecture are being adopted by businesses to secure their organizations and employees.

The concept of Zero-Trust is being adopted by enforcing strict controls over every individual or machine, inside or outside the network, looking to gain access to the environment. “Never trust, always verify”.

EDR platforms are being adopted by organizations to gain visibility into assets, whether that be in the cloud or on-premises devices, to detect abnormal behaviors and allow for quick automated remediation.

Lastly, IAM solutions are being adopted to authorize and authenticate users, including multi-factor authentication, to mitigate unwarranted and suspicious activity. – Jeremy Ventura, Field CISO at Myriad360

Moving Target Defense (MTD)

Moving Target Defense (MTD) is a technology that can tear down and rebuild a compute environment in seconds, making it nearly impossible for a hacker to gain persistence in the environment. MTD requires the use of containers and specific application conditions, so it demands a lot of implementation effort. However, once in place, an environment becomes extremely difficult to attack. Examples of this technology are Morphisec and Phoenix. – Andrew Plato, author of The Founder’s User Manual and Founder of Zenaciti


Technologies Influencing AI Trends This Summer
https://digitalitnews.com/technologies-influencing-ai-trends-this-summer/ (Fri, 30 Aug 2024)

Technologies play a role in how AI is implemented. Learn how tech has influenced AI trends this summer from the experts. 

Manufacturing Companies Are Slowly Integrating AI

AI is being investigated by many participants in the manufacturing sector, both large and small companies. However, only large companies like Siemens, FANUC, some major robotics companies, and larger automotive and aerospace firms, as well as pharmaceutical companies, can afford to implement AI meaningfully.

AI is still too early in its development cycle to have numerous ready-made applications, making it difficult to implement. AI applications need to be built on a case-by-case basis since there are no off-the-shelf manufacturing applications that use AI natively. As a result, only large companies are currently taking advantage of AI.

Despite this, there is widespread excitement about AI, with many companies starting to use it at the ChatGPT level, such as writing better marketing copy, which is an excellent use case. The challenge, however, is that AI applications are slow to develop because they require a lot of data to be effective. Manufacturing is a great industry for AI as it generates a lot of measurable data and hard facts. But most manufacturing companies are under-digitized, so medium-sized and smaller companies are rapidly trying to digitize their data and create AI-ready repositories. They know they will benefit greatly from AI once they accomplish this, but it is a big and expensive task. Consequently, adoption will be slow, except at the highest levels. – Rhonda Dibachi, CEO at HeyScottie

Hackathons Hit the Wall

2023 and early 2024 saw a raft of internal projects leveraging public AI-as-a-Service vendors for prototyping. However, the gap between prototype and productionisation will lead most of these projects to hit a wall and require working with specialized vendors who can amortize deeper R&D across many customers. – Dev Nag, CEO/Founder at QueryPal

An Influx of Point Solution Companies Implementing AI

The market is still really strong and bullish on GenAI solutions that can create new industries and categories or disrupt existing ones.

Sectors such as medicine, healthcare and financial services are seeing a massive influx of companies creating point solutions that deploy AI in new ways that create value.

For example, tons of companies in the healthcare space are using AI to create new drugs and treatments that would not exist without the massive compute and processing power available to them via the AI boom. – Matt Biringer, CEO at North

Reimagining Computing Experiences and Infrastructure

The enterprise IT sector is poised for a profound transformation driven by artificial intelligence (AI), marking a seismic shift towards more agile, fast, and cost-efficient operations. As computing power continues to advance, AI’s integration into every facet of digital work—from software development to application delivery—is reshaping traditional IT frameworks and architectures. This shift not only speeds up development processes through AI-driven tools like intelligent code completion and automated testing but also disrupts application delivery models, necessitating faster and more flexible deployment methods such as continuous integration and delivery (CI/CD). Enterprises are thus compelled to fundamentally reimagine their computing experiences and infrastructure to harness AI’s full potential. This transition towards AI-enhanced environments promises significant enhancements in productivity, innovation speed, and operational efficiency, offering a competitive edge in the swiftly evolving digital landscape. – Prashant Ketkar, CTO at Parallels.

AI-assisted Linguistic Services in Healthcare

In healthcare, applying AI technology to language and interpreting services has yet to become standard practice when assisting patients with limited English proficiency (LEP) – but that is about to change. It is common knowledge that providers are bound by law to provide linguistics services support to patients in their language of choice. That means that live interpreters typically can be found in hospitals, particularly the emergency room (ER). Beyond the ER, however, most LEP patients are on their own, trying to decipher a hospital menu to order a meal or when simply asking for help. AI can and should be considered to fill patient touchpoint gaps, especially in non-emergent medical situations. Another reason to consider AI application: America is home to 46.2 million immigrants, with over three-quarters holding legal status, marking the highest population in U.S. history as of 2022. Investment in AI-assisted language solutions can help healthcare leaders successfully address three top motives to better serve their non-English-speaking patient population: Cost, efficiency, and quality and engagement. Moreover, AI-assisted enhancements help to level up the quality of the interpreting experience and vastly improve patient compliance and outcomes. – Dipak Patel, CEO at GLOBO Language Solutions

Using AI to Automate Sales

I think one area where we are starting to see the applied use of AI is in the AdTech/MarTech vertical, which is applicable to all businesses and not just travel. Those of us who live and breathe marketing [my entire career has been in technology-based marketing] have now spent the past few years dabbling with generative AI in content creation and workflows. But now, we marketers are looking at how we can automate sales and not just marketing. We have experimented with conversational agents and chat/telephony ourselves, and have seen others experiment here too, with the corporate direction to improve conversion rates and sales success. – John Lyotier, CEO and Founder at TravelAI

Hybrid Switches

As AI continues to advance, hybrid switches that support both PCIe 5.0 and CXL 2.0 will become indispensable in the next generation of AI infrastructure. These hybrid solutions will be the key to overcoming the increasingly complex demands of AI workloads, offering the flexibility to handle both high-speed data transfer and efficient memory sharing. I predict that the adoption of hybrid switches will accelerate, becoming a standard in AI systems, enabling seamless scalability, and future-proofing AI infrastructure across industries. This shift will drive significant innovation, allowing AI applications to reach new heights in performance and efficiency. – Gerry Fan, CEO at XConn Technologies

Enhance Strategic Decision Making With AI Cost Estimation

In the rapidly evolving landscape of IT and digital engineering, we’re seeing a growing demand for cost management technology that allows businesses to streamline projects with AI-driven insights and analysis. The integration of generative AI enables users to leverage sophisticated predictive analytics and machine learning enhancements, so businesses can deliver projects on time, within budget, and with optimal resource utilization. By analyzing extensive historical data, AI models can make highly accurate predictions, learning from past projects to reduce the likelihood of cost overruns.

Its ability to learn and evolve is one of AI’s most compelling features within cost estimation. With each completed project, AI systems refine their algorithms, leading to more accurate estimates in future projects. This continuous improvement is crucial for industries where precision in cost estimation is paramount. Also, AI can continuously update estimates as projects progress and conditions change, such as supply chain disruptions or labor shortages. This approach ensures that estimates remain relevant and accurate throughout the project’s lifespan.

AI has the ability to automate routine and repetitive tasks in cost estimation, which frees up human experts to focus on the more complex and strategic aspects, enhancing overall efficiency. AI also excels in taking into account the unique requirements of each project, including local labor and material costs, to tailor estimates accordingly, ensuring estimates are accurate and relevant to the specifics of each project.

While AI offers a range of advantages in cost estimation, it’s crucial to approach its adoption with a balanced perspective, acknowledging its potential benefits and limitations. Integrating AI in cost estimation is not just about adopting new technology; it’s about enhancing the strategic decision-making process in project management. – Charles Orlando, Chief Marketing Officer at Galorath Incorporated
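To ground the idea of learning cost estimates from historical projects, here is a deliberately tiny sketch using scikit-learn: it fits a regression model on made-up past projects (scope, team size, duration) and predicts the cost of a new one. The features, numbers, and model choice are illustrative assumptions only; production estimators would use far richer data, validation, and domain-specific adjustments.

```python
# Tiny illustration of data-driven cost estimation with scikit-learn.
# The historical project data below is invented purely for demonstration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features: [scope in story points, team size, duration in months]
X_history = np.array([
    [120, 5, 6],
    [300, 9, 12],
    [80, 4, 4],
    [500, 15, 18],
    [200, 7, 9],
])
# Target: final project cost in USD (also invented)
y_history = np.array([250_000, 780_000, 160_000, 1_450_000, 510_000])

model = LinearRegression().fit(X_history, y_history)

new_project = np.array([[260, 8, 10]])
estimate = model.predict(new_project)[0]
print(f"Estimated cost: ${estimate:,.0f}")
```

The continuous-improvement point above corresponds to retraining a model like this as each new project closes, so the estimator keeps learning from actual outcomes.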

Federated Learning

Federated Learning is an innovation that is very interesting. Instead of taking all data to one main place for processing, this method lets different devices or servers work together without sharing the raw data directly. It’s a big win for privacy and security, and businesses are loving it.

Federated Learning smartly fixes privacy concerns about data. It allows businesses to use AI capabilities while protecting personal information securely. – Erik Severinghaus, Founder and CEO at Bloomfilter
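A minimal sketch of the core idea, often called federated averaging: each client fits a simple model on its own data and shares only model parameters, which the server averages; raw data never leaves the client. This is a toy numpy illustration under those assumptions, not a production federated learning framework.

```python
# Toy federated averaging: clients share model parameters, never raw data.
import numpy as np

rng = np.random.default_rng(0)
true_w, true_b = 3.0, -1.0

def make_client_data(n):
    x = rng.normal(size=n)
    y = true_w * x + true_b + rng.normal(scale=0.1, size=n)
    return x, y

clients = [make_client_data(50) for _ in range(4)]  # data stays "on device"

def local_fit(x, y):
    # Each client fits a least-squares line on its own data only.
    w, b = np.polyfit(x, y, 1)
    return np.array([w, b])

# The server aggregates parameters only (here: a simple average).
local_params = [local_fit(x, y) for x, y in clients]
global_w, global_b = np.mean(local_params, axis=0)
print(f"Aggregated model: w={global_w:.2f}, b={global_b:.2f}")
```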

Edge AI: Smarter Devices Without the Wait

So, you know when you ask your phone how to go somewhere, and it takes a long time to answer? Edge AI is making this better by putting the smart thinking directly on your device. This means quicker replies, less information stored in the cloud, and improved privacy. Shops are using it to guess what you might wish to purchase before you even realize it yourself, making shopping easier and more tailored for each person. It feels like your phone or favorite shop knows you better than you know yourself! – Ghazenfer Monsoor, Founder and CEO at Technology Rivers

Organizational Use of AI Forensics and AI Visibility

The technology to secure the generative artificial intelligence (GenAI) that organizations are now leveraging has only been around since the first half of 2024. While GenAI’s adoption has become widespread and organizations are seeing its potential for business value, we’re also still learning about the negative impacts and security risks of GenAI, how to avoid them, and how to ethically harness GenAI’s power.

As we get further into 2024, organizations are going to need to take more proactive approaches to their GenAI applications and strategies to see the full benefits. One example is ensuring both AI forensics and AI visibility capabilities are available across all internal networks. This would look like auditing capabilities of all AI prompts and applications, including traceability, transparency, compliance, and risk management.

Should the worst happen, AI forensics could be a game-changer for organizations by giving them clear visibility into potential risks, tools being used, and who used them, as well as the prompts ingested by the AI models.

Organizations are finding out they cannot manage what they can’t see, making AI forensics and AI visibility a top priority for those looking to ensure even approved GenAI applications don’t pose a potential threat to security posture. –  Arti Raman, CEO and founder at Portal26

Utilizing AI to Improve User Experience

We are now in the phase where the rubber hits the road: lots of customers are realising that the promise of AI changing the way they operate was more hype than truth. The only businesses that have benefited from the huge hype around LLMs, ChatGPT, etc. are the ones that were selling “shovels in the gold rush,” which are Microsoft Azure, AWS, Databricks, etc.

But, that said, we will see some revolutionary products that are based on improving user experience become even bigger and capture more market. Take Perplexity.ai as an example: it is a serious challenger to Google. Perplexity, with its unique combination of blending search with the power of large language models, is an awesome win for a new-age company battling the behemoths. – Shubh Chatterjee, Founding Scientist at ALgoxlab LLC

Continued Progress in Quantum AI

I would like to highlight Quantum AI – it’s bound to be a true game-changer in computing. Although still theoretical, combining the principles of quantum mechanics with AI will allow us to process information at speeds and efficiency far beyond traditional computers. This opens the possibility of AI on the proverbial steroids. This is because quantum computers use qubits, which can exist in multiple states at once, such as one and zero, resulting in exponential computing power enabling them to solve complex problems much faster. This capability will enhance AI’s ability to analyze and predict outcomes. It might be the road that takes us to the much-discussed GAI or General Artificial Intelligence – the kind we’ve only seen in sci-fi movies not actual product demos.

The progress of quantum computing has been slow due to the specific conditions required to develop and operate qubits. However, this year has been a banner year for quantum computing, with exciting breakthroughs happening just this summer. Researchers have made significant progress in overcoming a major hurdle: creating stable qubits. One approach utilizes femtosecond lasers for precise manipulation. 

Another breakthrough involves manipulating defects in a silicon crystal lattice, using lasers to create high-quality qubits in silicon by introducing hydrogen atoms into defects. This technique allows for not only creation but also erasure of qubits – key for a more controlled and reliable system.

While it’s still challenging to get qubits to “talk” to each other, for example, these advancements represent significant progress in building a functional quantum computer.

To give you a business case of quantum AI and computing, it could revolutionize pharmaceutical R&D. Traditionally, drug discovery has been painfully slow and expensive. This is because it involves analyzing massive datasets and simulating countless molecular interactions in different scenarios. Quantum AI could accelerate this process severalfold by performing these simulations more efficiently and accurately. Basically, this would allow us to identify promising (and potentially much more efficient) drug candidates much faster and at a lower cost, which could revolutionize how we develop new medications, leading us to genetically personalized medicine, etc. – Ilia Badeev, Head of Data Science at Trevolution Group
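A small numerical aside to make the "exponential computing power" point above concrete: describing n qubits classically requires tracking 2^n complex amplitudes, so the state description doubles with every qubit added. The numpy sketch below simply builds that state vector for a few equal-superposition qubits; it illustrates state-space growth only and is not a simulation of any real quantum algorithm.

```python
# State-vector size grows as 2**n: each added qubit doubles the description.
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)  # one qubit in equal superposition of |0> and |1>

state = np.array([1.0])
for n in range(1, 6):
    state = np.kron(state, plus)      # add one more qubit to the register
    print(f"{n} qubit(s): {state.size} amplitudes")
```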

Data Architectures

In the burgeoning era of data dominance, businesses are keenly pursuing AI integration as a competitive lever, recognizing the necessity of modernizing data architectures to harness the full potential of Generative AI (GenAI) and advanced analytics. This imperative drives a demand for vendors who can deliver foundational technologies—such as robust data management, rapid data transfer, and reliable disaster recovery. As concerns over GenAI misuse persist, the need for secure, recoverable AI data becomes paramount, necessitating advanced data migration technologies and real-time cloud replication to support near-zero recovery time objectives (RTO) and recovery point objectives (RPO). Companies like Microsoft Azure and AWS are pivotal in demystifying AI and crafting tailored AI strategies for businesses, ensuring a seamless blend of AI into their strategic and technological frameworks. Over the next five years, as firms increasingly focus on monetizing AI-driven applications, those vendors that prioritize customer monetization outcomes and can efficiently move, protect, and recover large data sets will likely emerge as leaders. This shift emphasizes not only the technical integration of AI but also strategic alignment with business goals to optimize investment and maximize returns from GenAI initiatives. – Paul Scott-Murphy, Chief Technology Officer at Cirata

Open-source Decentralized AI 

The current trends are 100% around developing an open-source decentralized AI model. The fact that large companies can skew the input models is leading to a full-court press to build out a totally open-source product. Many DePIN platforms are a natural fit to deploy this robust decentralized AI model. The future of the people depends on unadulterated input models to ensure rock-solid output models. – Daniel Keller, CEO & Co-founder at InFlux


From Clicks to Conversion: The Overlooked Role of DNS in Business Success
https://digitalitnews.com/from-clicks-to-conversion-the-overlooked-role-of-dns-in-business-success/ (Tue, 20 Aug 2024)

A fast, reliable website can significantly enhance user experience, drive customer retention and boost sales. By contrast, a slow or unreliable website can spell disaster for business, detracting from revenue, brand value and user trust. Website performance plays a critical role in determining the outcome of a business—nearly half of all customers (46%) will not return to a website if they experience poor loading time, according to Tech Report. Slow or failed website loading can lead to missed business opportunities as potential customers abandon their visits. Dun & Bradstreet recently found that 59% of Fortune 500 companies endure over one hour of downtime each week, averaging a weekly cost ranging from $643,200 to $1,056,000.

The Domain Name System (DNS) is a fundamental internet protocol that translates human-friendly domain names into unique Internet Protocol (IP) addresses — a process essential for directing users to the correct websites quickly and accurately. DNS helps maintain the seamless operation of the Internet, enabling efficient access to online resources. We only see the finished result of the page – a fully formed piece of content in a single place. However, beneath that are typically hundreds of DNS connections. During peak times such as Cyber Monday, Black Friday, Super Bowl Sunday, or the upcoming Paris Olympics, DNS is the underrated superhero helping ensure all these connections happen at times of high demand.

There has been a significant gap, however, in the availability of reliable data on the performance of various DNS solutions. Measuring DNS performance is highly complex because real-world DNS connections are influenced by many variables, such as the local ISP connection, the distance between the user and server, or the resolver proximity. Moreover, since there is no universal standard for measuring DNS performance metrics, different methodologies across providers can lead to inconsistencies and make it difficult to compare results. Without access to standardized metrics, businesses often invest in “good enough” solutions, which reliably answer queries but not optimally.

Monitoring DNS performance is essential to maintaining a healthy, optimized online presence that prevents potential service disruptions or slowdowns for users. As mentioned, however, collecting authoritative DNS performance data is difficult. It requires continuous monitoring for real-time visibility and a distributed set of vantage points to ensure optimized user experiences across diverse locations. This can be resource-intensive and requires global, strategically positioned network infrastructure and advanced data processing capabilities.
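As a hedged sketch of what basic DNS response-time monitoring can look like from a single vantage point, the script below uses the dnspython package to time an A-record lookup against a few public resolvers. It illustrates the measurement idea only; it is not the methodology used in the IBM NS1 Connect/Catchpoint study described below, which relied on a globally distributed observability network. The resolver IPs and test domain are illustrative choices.

```python
# Simple single-vantage-point DNS timing sketch (requires the dnspython package).
import time
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}
DOMAIN = "example.com"

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    start = time.perf_counter()
    try:
        resolver.resolve(DOMAIN, "A")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name} ({ip}): {elapsed_ms:.1f} ms")
    except Exception as exc:  # timeouts, SERVFAIL, etc.
        print(f"{name} ({ip}): lookup failed ({exc})")
```

Running a check like this on a schedule from several regions is the crude, do-it-yourself version of the distributed vantage points the paragraph above describes.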

IBM NS1 Connect and Catchpoint recently conducted a collaborative study to address this gap, aiming to provide businesses with more reliable and detailed DNS performance metrics and to compare the performance of authoritative DNS providers. The study measured DNS speeds for over 2,000 popular websites during peak traffic times in November and December (the peak holiday season), comparing the performance of authoritative DNS providers and self-hosted DNS architectures, and ultimately uncovered substantial and surprising performance differences. The study encompassed a wide range of data sources and geographic regions, leveraging Catchpoint’s global observability network to monitor DNS performance in real time. This provided extensive worldwide coverage and high granularity to determine key metrics such as the global average DNS response time, regional variations across DNS providers, and variations by DNS provider type.

The key takeaways?

In numbers
• The average DNS response time across the websites we studied was 263ms.
• Response times in Europe and North America are significantly faster than in other regions, both hovering around 100ms.
• The slowest response times were in Oceania at around 350ms.
• Self-hosted DNS response time was 35% slower than the average global response time (an average difference of 141ms).
• Compared with the leading DNS provider in the study, IBM NS1 Connect, self-hosted DNS was 60% slower, a 244ms difference.

Don’t go it alone
Implement a managed DNS solution. A comparison of managed DNS options with self-hosted setups demonstrated distinct performance differences. Self-hosted DNS response time was in fact 35% slower than the average global response time across testing. Managed DNS solutions tended to provide more reliable and faster responses, enhancing overall user experience for businesses.

Consult a map
Be aware of regional variation. The study uncovered significant regional differences in DNS performance, with different continents showing varying results. North American and European countries performed best, due to factors such as higher infrastructure density, including a greater concentration of data centers and DNS servers, robust internet backbones, more connectivity between major cities, and closer proximity to DNS servers—factors an IT team should take into consideration when evaluating their individual performance.

Continuously improve
Create regular touchpoints for teams to review and optimize DNS infrastructure on an ongoing basis. Assess the difference between premium and self-hosted DNS services and between DNS providers. Performance can vary greatly across them all. Regularly measure and review DNS performance, identify bottlenecks and highlight areas where improvements are needed. This work will pay off in the long run.

Your users deserve a consistently positive experience, and with a robust DNS infrastructure in place, you can minimize downtime and enhance customer experience across the board. Monitoring DNS matters. Improving DNS performance directly contributes to better business performance by reducing potential slowdowns or service disruptions and speeding up load times, leading to better user experiences. By leveraging the latest comprehensive data in the study, businesses can benchmark their DNS performance against reliable performance metrics, identify areas for improvement and ensure that website clicks convert to customers.

Related News:

Live Internet Outage Map for Real-Time Internet Health Released

2024 SRE Report Reveals Current Status of Site Reliability Engineering

How to Prevent a CrowdStrike IT Outage Repeat
https://digitalitnews.com/how-to-prevent-a-crowdstrike-it-outage-repeat/ (Wed, 14 Aug 2024)

A CrowdStrike software issue caused widespread problems with its Falcon Sensor product. This IT outage, caused by a content update, affected millions of Windows hosts across multiple industries worldwide.

Let’s talk about the cause of the CrowdStrike issue, what unscathed companies did right, and what professionals have to say about preventing this from happening again.

What Caused the Software Issue: Lax Software Testing Processes or More?

Many believe adequate software testing would have prevented this catastrophe. However, others have concluded that multiple layers of bugs caused the issue, which is more difficult to catch in a fully automated testing system. 

Even testing for one minute would have discovered these issues …In my mind, that one minute of testing would have been acceptable. – Kyler Middleton, senior principal software engineer at Veradigm

Testing continues to be a significant point of friction [in application development]…Software quality governance requires automation with agile, continuous quality initiatives in the face of constrained QA staff and increasing software complexity…Software testing, both for security and quality, appears to be among the most promising uses for generative AI in other IDC surveys…I am hopeful that the next few years will see improvements in these statistics…However, AI can’t fix the lack of or failure to follow policy and procedures. – IDC analyst Katie Norton

The CrowdStrike flaw was caused by multiple layers of bugs. That includes a content validator software testing tool that should have detected the flaw in the Rapid Release Content configuration template — an indirect method that, in theory, poses less of a risk of causing a system crash than updates to system files themselves …This is a challenge in fully automated systems because they, too, rely on software to progress releases from development through delivery … If there’s a bug in the software somewhere in that CI/CD pipeline … it can lead to a situation like this. So to discover the testing bug in an automated way, you’d have to test the tests. But that’s software, too, so you’d have to test the test that tests the tests and so on. – Gabe Knuth, analyst at TechTarget’s Enterprise Strategy Group.

How Some Companies Went Unscathed

Not every company that got the blue screen of death had to shut down. Some had procedures in place that helped them recover relatively quickly.  

We’ve really focused on business continuity, redundancies, safety nets, and understanding of the difference between cybersecurity as a task and cybersecurity as a cultural commitment of your organization…It’s a validation of our investments while so many of our peers were languishing…The redundancies are numerous…They’re not necessarily terribly sophisticated, but we have literally gone through and said, ‘What are the critical systems of our organization? What is the interplay between them? And if it comes crashing down, what is the plan?’…The reality for cybersecurity and business continuity is the work [must be] done well ahead of the disaster. It has to be part of the fabric of your company, like compliances, like customer service…It’s hard to celebrate cybersecurity—except for the days when you’re the only ones not sweating it. – Andrew Molosky, president and CEO of Tampa-based Chapters Health System

Professionals Input on Preventing A Repeat 

Everyone wants to avoid a repeat. Below is some advice from professionals on preventing this from happening again. 

Phased Check-ins on Endpoint Health

I’m incredibly surprised, even though they call it ‘Rapid Response,’ that [CrowdStrike] doesn’t have some phased approach that allows them to check in on the health of the endpoints that have been deployed … Even with some logical order of customer criticality, they could have circuit breakers to stop a deployment early when they see it causing health issues. For example, don’t [update] airlines until your confidence level is higher from seeing the health of endpoints from other customers. – Andy Domeier, senior director of technology at SPS Commerce
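A minimal sketch of the kind of phased, health-gated rollout described above: push an update to small cohorts in order of increasing criticality, check endpoint health after each wave, and trip a circuit breaker that halts the rollout if failures exceed a threshold. The deploy and health-check functions are hypothetical placeholders, not CrowdStrike's or any vendor's actual API.

```python
# Illustrative phased rollout with a health-based circuit breaker.
# deploy_to() and failure_rate() are hypothetical stand-ins for real
# deployment and telemetry systems.
import time

COHORTS = ["canary", "internal-fleet", "low-criticality-customers", "airlines"]
FAILURE_THRESHOLD = 0.01   # halt if more than 1% of endpoints report unhealthy
SOAK_SECONDS = 600         # wait before checking the health of each wave

def deploy_to(cohort: str) -> None:
    print(f"Deploying update to {cohort}...")   # placeholder deployment call

def failure_rate(cohort: str) -> float:
    return 0.0                                   # placeholder telemetry query

def phased_rollout() -> bool:
    for cohort in COHORTS:
        deploy_to(cohort)
        time.sleep(SOAK_SECONDS)                 # let endpoints check in
        rate = failure_rate(cohort)
        if rate > FAILURE_THRESHOLD:
            print(f"Circuit breaker tripped at {cohort}: {rate:.2%} unhealthy. Halting rollout.")
            return False
        print(f"{cohort} healthy ({rate:.2%} failures); continuing.")
    return True

if __name__ == "__main__":
    phased_rollout()
```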

Move Away from Auto-deploying Kernel Module Updates

It is absolutely irresponsible to auto-deploy a kernel module update globally without a health-mediated process or, at least, a recovery path at a lower level of the control plane … Something that remains functional even if the OS deployed on top crashes. – David Strauss, co-founder and CTO at Pantheon
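
The "recovery path at a lower level of the control plane" that Strauss calls for can be sketched as a boot-health watchdog: if the layer above keeps crashing after an update, the watchdog reverts to the last known-good module before handing control back. The file paths, the activate-module command and the failure threshold below are hypothetical placeholders, not an existing tool.

```python
import json
import pathlib
import subprocess

# Hypothetical paths and command: this sketches the "recovery path below the
# OS" idea, not any vendor's actual update mechanism.
STATE = pathlib.Path("/var/lib/sensor/last_known_good.json")
BOOT_ATTEMPTS = pathlib.Path("/var/lib/sensor/boot_attempts")
MAX_FAILED_BOOTS = 2


def record_boot_attempt() -> int:
    count = int(BOOT_ATTEMPTS.read_text()) + 1 if BOOT_ATTEMPTS.exists() else 1
    BOOT_ATTEMPTS.write_text(str(count))
    return count


def mark_boot_healthy(current_module: str) -> None:
    # Called once userspace is fully up: clear the counter and record the
    # current module version as the new known-good baseline.
    BOOT_ATTEMPTS.write_text("0")
    STATE.write_text(json.dumps({"module": current_module}))


def maybe_roll_back(current_module: str) -> None:
    if record_boot_attempt() <= MAX_FAILED_BOOTS:
        return
    known_good = json.loads(STATE.read_text())["module"]
    if known_good != current_module:
        # Recovery path: re-activate the previous module before handing
        # control to the layer that keeps crashing.
        subprocess.run(["/usr/sbin/activate-module", known_good], check=True)
```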

Eliminate Unmanageable Endpoint Complexity

The Windows endpoint environment has reached the point of unmanageable complexity. A steady stream of updates and layering of security features has created a web of complexity that is difficult to manage or fix and therefore promotes risk. Moving Windows to the cloud and replacing the endpoint with a secure-by-design operating system, such as IGEL OS, can simplify management through centralization and aid in recovery should an outage or breach occur, saving millions of dollars in lost productivity. We have grown somewhat numb to the steady stream of data breaches. This latest incident of the shepherd turning on the metaphorical sheep it was protecting highlights that we must consider approaching this problem differently. The move to Windows 11 and the opportunity for cloud transformation, along with the proliferation of SaaS, are proven technologies that can enable a much more secure endpoint strategy. – Jason Mafera, Field CTO at IGEL

Platform, People and Process in Software Testing

It’s not sufficient to just have a great software platform. It’s not sufficient to have highly enabled developers. It’s also not sufficient to just have predefined workflows and governance. All three of those have to come together. – Dan Rogers, CEO at LaunchDarkly

Balance Security With Tight Deadlines 

What you don’t want to have happen now is that you’re so worried about making software changes that you have a very long and protracted testing cycle and you end up stifling software innovation. – Dan Rogers, CEO at LaunchDarkly

Security News

The post How to Prevent a CrowdStrike IT Outage Repeat appeared first on Digital IT News.

Transforming Data Center Security with Searchable Encryption https://digitalitnews.com/transforming-data-center-security-with-searchable-encryption/ Mon, 22 Jul 2024 17:00:51 +0000

The world has become extremely cyber-reliant. The data centers (DCs) that underpin the cloud, GenAI/AI, CRM software, entertainment, financial, government and healthcare operations are now ground zero for targeted, Stuxnet-like attacks, which would have catastrophic results.

If a cluster of hyperscale DCs were to go offline, it could stop the US economy, and potentially the global economy, in its tracks, much like the 9/11 World Trade Center attacks stopped air travel. Today, Integrated Data Center Management (IDCM) implementations prepare and build out redundant systems to maintain all functional operations, including risk management and mitigation. At the center of IDCM is Operational Technology (OT) command-and-control data.

Data Center Security Risks

We often look at processing, cooling, connectivity, and access as primary DC concerns. In reality, electricity is the main chokepoint. The disruption, or complete loss, of core infrastructure electrical power is the most critical DC failure. We already know that the US’s critical infrastructure, which includes the electrical grid, is currently at risk.

Because of this, DCs have mitigated grid power failures by building out massive on-site power generation facilities. Even with these mitigation efforts, what happens if the chillers and cooling systems lose power, the standby generation fails to start, and the transfer batteries quickly drain, leaving no time for a controlled shutdown? Almost immediately, residual heat without cooling would reach temperatures capable of destroying racks of critical equipment.

The Impact of an Attack

We don’t have to go back far in time to see how such a catastrophic DC failure could happen. The Stuxnet computer worm was focused on sabotaging the specific industrial control systems used in Iran’s nuclear enrichment facilities, particularly the Siemens Step7 PLCs. The threat actor leveraged a “blind spot”, resulting in a cyber-attack that succeeded without any detection by the monitoring systems.

In this instance, none of the cybersecurity detection solutions were alerted, because the worm did not come in from the Internet, nor was it injected by a thumb drive. It was delivered as part of a routine spare-part replacement, which is basic maintenance.

With access to the intranet from an OT device, the worm quickly spread to other computing devices. Stuxnet was a sophisticated worm that exploited four zero-day vulnerabilities and worked through a checklist of activities. To achieve its goal, it needed to understand the specific configurations and operational details of the PLCs and where to find the targeted Siemens Step7 PLCs.

Stuxnet started its reconnaissance by collecting specific information about the systems it infected. This information included details about the configuration and operation of the industrial control systems, stored in unencrypted, active SQL databases. However, the primary goal was not to steal data but to leverage the collected information to learn and refine its sabotage operations.

Stuxnet’s primary operations focused on manipulating the PLCs and the real-time data being fed to SCADA systems, but it did interact with internal SQL databases. It accessed and modified configuration data, indirectly affecting operational databases by ensuring they logged falsified data. However, it did not target traditional IT databases for data theft or direct manipulation.

Evolving Threats to Data Centers

Stuxnet and its variants, Duqu and Flame, along with new threat variants, including those based upon GenAI, pose a threat to IDCM software applications and their active, on-demand data. The majority of that data is stored in plaintext to support active operations.

The MITRE ATT&CK framework starts with reconnaissance, whereby hackers, once in, search for data and possible defenses. Once past the defenses, plaintext SQL databases can be manipulated, stolen, or invisibly controlled by remote threat actors.

Today’s most dangerous IDCM application threats are SQL injection and cross-site scripting. SQL injection tools enable hackers or worms to automate their attack processes and quickly exploit vulnerabilities. It’s called “SQL” injection because the adversary is trying to find a vulnerability in the application to directly talk to the SQL database, bypassing application safeguards.
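
To make the mechanism concrete, the short sketch below contrasts a query built by string concatenation, where attacker-supplied input is executed as SQL, with a parameterized query that treats the same input strictly as data. It uses Python's standard sqlite3 module purely for illustration; the table and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (id TEXT, setpoint REAL)")
conn.executemany("INSERT INTO sensors VALUES (?, ?)",
                 [("chiller-01", 18.5), ("chiller-02", 19.0)])

user_input = "chiller-01' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is spliced into the SQL text, so the injected
# OR clause bypasses the filter and returns every row.
vulnerable = conn.execute(
    "SELECT * FROM sensors WHERE id = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input strictly as a value,
# so the injection attempt matches nothing.
parameterized = conn.execute(
    "SELECT * FROM sensors WHERE id = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(parameterized))   # prints: 2 0
```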

Ending DB Attacks With Encryption

But what if the IDCM application spoke to its critical data through an API that can be secured and monitored?

With new Searchable Encryption technology, users can perform computations on AES-256-encrypted data while the data remains fully encrypted. Solutions based on Searchable Symmetric Encryption (SSE) allow database operations (create, read, update and delete) without ever needing to decrypt that data.
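
As a rough sketch of the concept, and not of any vendor's product, the example below encrypts records with AES-256-GCM and builds an HMAC-derived token index so that equality searches can be answered without decrypting the stored values. Key management, result padding and the leakage trade-offs of real searchable-encryption schemes are deliberately omitted.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

enc_key = AESGCM.generate_key(bit_length=256)   # AES-256-GCM data key
mac_key = os.urandom(32)                        # separate key for search tokens
aesgcm = AESGCM(enc_key)

encrypted_store = []   # list of (token, nonce, ciphertext) held by the "server"


def search_token(keyword: str) -> bytes:
    """Deterministic token derived from the keyword; the store never sees plaintext."""
    return hmac.new(mac_key, keyword.lower().encode(), hashlib.sha256).digest()


def insert(keyword: str, record: str) -> None:
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, record.encode(), None)
    encrypted_store.append((search_token(keyword), nonce, ciphertext))


def search(keyword: str) -> list[str]:
    token = search_token(keyword)
    return [
        aesgcm.decrypt(nonce, ct, None).decode()
        for t, nonce, ct in encrypted_store
        if hmac.compare_digest(t, token)        # match on tokens, not plaintext
    ]


insert("plc-07", "firmware=2.1; vendor=Siemens; status=active")
insert("plc-09", "firmware=1.8; vendor=Siemens; status=standby")
print(search("plc-07"))   # only the matching record is decrypted
```

In a full SSE deployment the encrypted store and the token matching would sit on the untrusted server, while the keys and decryption stay with the client, so reconnaissance against the database yields nothing usable.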

If the Iranian nuclear enrichment facilities’ SCADA systems had used Searchable Encryption, Stuxnet’s reconnaissance effort would have revealed no usable data, which could have stopped the attack immediately. The entire intranet and its devices would have remained concealed behind encrypted data.

A Searchable Encryption solution aligns with the Federal Data Center Enhancement Act of 2023’s use-of-technology requirements to “regularly assess the application portfolio of the covered agency and ensure that each at-risk legacy application is updated, replaced, or modernized, as appropriate, to take advantage of modern technologies”.

Today, all data centers are at risk, vulnerable through IoT and OT data that is not secured. Against a system protected with Searchable Encryption, a Stuxnet-like worm or a skilled hacker will struggle to find the data necessary to execute an attack. It’s time to ensure our data centers are secure by design, since they are critical infrastructure with many attack vectors, motivated attackers, and most of the world’s critical data.

Related News:

Crusoe to Build AI Data Center at the Lancium Clean Campus

The ScienceLogic Platform Delivers New AIOps Enhancements

The post Transforming Data Center Security with Searchable Encryption appeared first on Digital IT News.

Cloud Storage Trends to Stay on Top Of https://digitalitnews.com/cloud-storage-trends-to-stay-on-top-of/ Wed, 10 Jul 2024 13:00:06 +0000

Technology is constantly in flux. To stay on top of cloud storage trends, businesses considering adding cloud storage need accurate information about this movement. Below, companies in the cloud and data storage space share the cloud storage trends they are seeing so you can be more informed. Take a look! 

Emerging Technology Influencing Cloud Storage Trends This Summer 

Artificial Intelligence 

The rapid growth of Artificial Intelligence (AI) across industries is influencing cloud storage trends this summer. Technologies like Machine Learning (ML) and Generative AI (GenAI) depend on the ability to access and manipulate large datasets in the cloud, requiring large capacity, storage flexibility and high throughput. Cloud storage solutions are therefore evolving rapidly, driven by the need for better performance, security and cost-efficiency. Cloud-native architectures and the increasing use of serverless computing in event-driven applications and microservices are also shaping the storage technology landscape. Many businesses adopting hybrid and multi-cloud setups are developing strategies that offer flexibility and improve resilience, while avoiding vendor lock-in by the large public cloud providers. – Efrain Ruh, AIOps Expert and CTO at Digitate

AI/ML is a primary driver for moving data to the cloud from its initial location, which might be on-premises or in another cloud. After moving the data, the next step is to transform it into a format that can be consumed by a particular workload. Often, the initial transformation is followed by a second or third transformation to meet security or compliance requirements.

Transformation is a euphemism for using compute resources. Generally, compute is the most expensive resource in the cloud, but that cost must be weighed in calculations regarding the data life cycle. For instance, it may be cheaper to retransform data later rather than pay ongoing storage fees. It is entirely situation-dependent.

Each transformation will require an appropriate data life cycle policy to be applied to it to minimize costs. A common requirement is that the data be moved from colder to warmer storage (and back again) based on the needs of the AI/ML workload so it can be used for future training activities or additional transformations. – David Christian, Global Migration Lead at DataArt
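
As a hedged illustration of the kind of life-cycle policy Christian describes, the boto3 sketch below transitions a hypothetical training-data prefix to cooler storage classes as it ages; the bucket name, prefix and day counts are illustrative assumptions, and a real policy would be tuned to the workload's observed access pattern.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix; transition ages are illustrative, not prescriptive.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-ml-datasets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-transformed-training-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "transformed/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},   # cooler tier
                    {"Days": 180, "StorageClass": "GLACIER"},      # archive tier
                ],
                # Drop transformed copies once retransforming is cheaper than storing.
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```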

Cloud-first Policy Adoption

Leading into the summer season, ‘cloud-first’ has become a widely-adopted rule for companies that want to compete in the data-driven economy. With data only increasing in volume, the massive cost savings afforded by the cloud make it impossible for many organizations to opt for on-premises data centers. Of course, there are always exceptions to the rule. Still, today, cloud providers offer the security, flexibility, and often even the data residency requirements needed for any company’s unique circumstances.

In today’s collaborative data environment, where data is shared between departments, team members, and even satellite organizations, an on-premises solution can’t match the scalability and efficiency of the cloud. Disparate data was once a major stumbling block, but the ability of data platforms to do away with silos and connect data wherever it resides has been addressed effectively by the cloud in a way that on-premises solutions can’t match. – Sharad Varshney, CEO at OvalEdge

Software-defined Storage

One technology that’s not necessarily new but is becoming increasingly important is software-defined storage. Think of it this way: You’ve got lots of data in storage—a video archive, for example—that you don’t access often. There’s always an access pattern that emerges around this kind of data. But sometimes, those patterns change depending on what type of content people need at the moment. Traditionally, archivists notice that change and manually move some of the archival data into hotter storage so people can access it faster and cheaper. Software-defined storage builds frameworks to automate that process through scripting or AI to optimize for cost and performance.  – Majed Alhajry, technology, business process, and software development leader at MASV
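
A minimal sketch of the automation Alhajry describes might look like the following, assuming a simple access log and two tiers named hot and archive; the thresholds are placeholders where a production system would learn them from observed access patterns or a model.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Illustrative thresholds; a real software-defined storage layer would derive
# these from observed access patterns rather than hard-coding them.
HOT_ACCESS_THRESHOLD = 20          # accesses in the window that justify hot storage
WINDOW = timedelta(days=7)

access_log = []                    # (object_key, timestamp) pairs from the archive front end


def plan_tier_moves(current_tiers: dict[str, str]) -> dict[str, str]:
    """Return {object_key: target_tier} for objects whose recent access
    pattern no longer matches the tier they currently sit in."""
    cutoff = datetime.now(timezone.utc) - WINDOW
    recent = Counter(key for key, ts in access_log if ts >= cutoff)

    moves = {}
    for key, tier in current_tiers.items():
        busy = recent[key] >= HOT_ACCESS_THRESHOLD
        if busy and tier == "archive":
            moves[key] = "hot"      # promote: viewers suddenly want this content
        elif not busy and tier == "hot":
            moves[key] = "archive"  # demote: the access pattern has cooled off
    return moves
```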

Security Concerns Shaping Organization Approach to Cloud Storage 

Client Misconfigurations

A significant proportion of cloud security breaches are due to client misconfigurations, which are often driven by a lack of cloud expertise. That’s why some cloud providers have moved to the shared responsibility model. This model stakes out a middle ground between cloud providers dictating everything you can and cannot do, on one hand, and leaving customers to fend for themselves on the other. Shared responsibility means cloud providers implement sensible defaults—such as strong password enforcement or ensuring new storage buckets aren’t made public by default—while allowing customers the flexibility to configure their storage to suit specific use cases. – Majed Alhajry, technology, business process, and software development leader at MASV

Cloud Governance

Cloud governance is always an important element in any enterprise cloud implementation. Using cloud-native tools such as Config or Security Hub in AWS, Defender for Cloud in Azure, and Security Command Center in GCP allows you to see the security state of all storage repositories. These tools report on questions like: Is the repository encrypted? Is the repository encrypted in a cost-efficient way? Does the repository have a life-cycle policy assigned to it? Does the repository restrict access from the Internet or internally? Are the policies that allow access to the repository the least privileged? Are permanent access keypairs disallowed or severely restricted?

Finding a repository that is out of compliance will mean scheduling it for a change to meet compliance needs. Creating new, out-of-compliance repositories is generally prohibited by policy. – David Christian, Global Migration Lead at DataArt
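
A few of those governance questions can be checked programmatically. The sketch below uses AWS S3 via boto3 with a hypothetical bucket name and covers only three of the checks Christian lists; equivalent APIs exist in Azure and GCP, and a real implementation would iterate over every repository rather than a single bucket.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-governed-bucket"   # hypothetical

findings = {}

# Is the repository encrypted (default bucket encryption configured)?
try:
    enc = s3.get_bucket_encryption(Bucket=bucket)
    findings["encryption"] = enc["ServerSideEncryptionConfiguration"]["Rules"]
except ClientError:
    findings["encryption"] = "NOT CONFIGURED"

# Does the repository have a life-cycle policy assigned to it?
try:
    lc = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    findings["lifecycle"] = [r.get("ID", "unnamed") for r in lc["Rules"]]
except ClientError:
    findings["lifecycle"] = "NOT CONFIGURED"

# Does the repository restrict access from the Internet?
try:
    pab = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    findings["blocks_public_access"] = all(pab.values())
except ClientError:
    findings["blocks_public_access"] = False

for check, result in findings.items():
    print(f"{check}: {result}")
```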

The Role Environmental and Sustainability Plays in Cloud Storage Trends 

Optimizing Resource Use

Environmental and sustainability considerations in cloud storage often focus on optimizing resource use. Cloud providers therefore allow users to select the best type of storage and deliver technological solutions that allow them to move data to more cost-effective platforms with ease, reducing not only their carbon footprint but also costs. – Efrain Ruh, CTO / Cloud Management Professional at Digitate

Centralization of Cloud Storage

There’s a lot of greenwashing in the cloud storage space, but there is merit to some of it. For example, the centralization of cloud storage is one of its most important sustainability features: If everyone in the cloud decided to build their own data centers, the amount of space and other resources required would far exceed what they’re using in the cloud. The capacity of the cloud is also higher due to economies of scale, which means you can store more gigabytes per cubic foot, which means less need for cooling, silicon, and other resources. Some public clouds have even started using underwater data centers, which use ocean water as a cooling method and require far less power. – Majed Alhajry, technology, business process, and software development leader at MASV

Efficient Allocation of Resources

According to reports, 60% of all corporate data is currently stored in a public cloud. Cloud providers have economies of scale within their data centers that simply cannot be matched by corporate data centers. In a data center, the tendency is to leave the compute resources, bare metal, and virtual machines on 7×24 in case they might be needed to process data. In the cloud, from a customer’s point of view, when the data needs to be processed in some way, the compute is enabled, the data is processed, and the compute is turned off. From the cloud provider point-of-view, what is actually happening is the compute is reallocated to other customers, but the overall carbon footprint is reduced globally due to a more efficient allocation of resources. – David Christian, Global Migration Lead at DataArt

The Influence of Remote Work and Hybrid Work on Cloud Storage Trends

Flexible and Scalable Storage

The shift towards remote and hybrid work models has pushed for more flexible and scalable cloud storage solutions. As teams continue to work remotely, there is a higher reliance on collaboration tools like MS Teams, Slack, Google Workspace, etc., requiring robust cloud storage. Remote work often introduces security vulnerabilities, making data protection a top-level concern. Having confidential organizational data accessed from multiple locations and devices increases the risk of an attack or a breach. Effective backup and recovery capabilities are also crucial to minimize risks in a hybrid work model. – Efrain Ruh, CTO / Cloud Management Professional at Digitate

Expanded Geographic Spread of the Workforce

Hybrid and remote work makes cloud storage a necessity, especially if you have a geographically spread-out workforce. That geographic spread can create significant expenses for on-prem organizations that need employees to access storage from anywhere with low latency. You also can’t provision on demand with on-prem storage—you have to provision for the worst-case scenario, just in case—so companies constantly overpay for capacity they don’t usually need. The economies of scale built into cloud storage suit hybrid work models because they allow organizations to scale up and down quickly without requiring significant CapEx. – Majed Alhajry, technology, business process, and software development leader at MASV

Recent Unexpected Uses for Cloud Storage

Storage as a Service

Several use cases for cloud storage have emerged beyond traditional data storage. One example is Storage as a Service (STaaS), a solution that organizations are starting to adopt to reduce complexity and increase efficiency through a consumption-based as-a-service model with increased levels of automation. – Efrain Ruh, CTO / Cloud Management Professional at Digitate

AI Payloads and Training Data

The cloud is very well suited for AI payloads and hosting AI training data use cases, which require rapid access to data and large amounts of sequential reads. Cloud storage is well-suited, efficient, and cheap for these use cases. It gets very expensive to have a training data corpus stored on prem—those drives must be spinning all the time to provide on-demand access, even though you’re not training your model at all times. With hot storage in the cloud, you get that access on demand, and access to that data is usually free, so you’re only paying for the storage element. This applies across most industries. – Majed Alhajry, technology, business process, and software development leader at MASV

Consolidating Compute Resources

We’ve been seeing organizations that initially took a multi-cloud approach begin to reconsider and consolidate into a single cloud. Data gravity is real: egressing data between clouds, or even between regions within a cloud, is more expensive than originally calculated. Putting all compute resources in close proximity to data repositories has been a recent trend because it is more efficient. – David Christian, Global Migration Lead at DataArt

Cloud News

The post Cloud Storage Trends to Stay on Top Of   appeared first on Digital IT News.

Addressing the Labor Shortage in the Field Service Industry https://digitalitnews.com/addressing-the-labor-shortage-in-the-field-service-industry/ Mon, 17 Jun 2024 17:00:38 +0000

What is the Role of AI in Future-Proofing Trades and Attracting Young Talent

550,000. That’s how many plumbers the US is expected to be short of in 2027, according to an analysis by John Dunham & Associates. The shortage is not confined to a single trade but spans multiple critical trades, including plumbers, electricians, and carpenters. Additionally, the field service industry – the workers responsible for installing, maintaining, and repairing complex equipment and machinery – faces a critical labor shortage exacerbated by an aging workforce and a widening skills gap. In North America, 46% of field techs are over 50 years old and looking to retire within the next decade.

But, there’s a promising shift. Recent reports highlight that many Gen Z workers are now opting for trade careers over traditional college pathways. The steep cost of college is not the sole motivator steering young people towards skilled trades. As AI becomes more prevalent, many Gen Zers perceive manual labor as less susceptible to full replacement by emerging technologies compared to many white-collar jobs.

While this demographic recognizes that AI won’t completely replace these roles, they see the value in utilizing AI to assist with their tasks. For field service specifically, AI can help by simplifying diagnostics and enhancing troubleshooting, making these roles more accessible and appealing to the next generation.

Additionally, AI’s ability to train new workers on technical skills contributes to the sustainability of the workforce and the overall success of the company. Recent data indicates that job mobility is common across various age groups, particularly among younger workers. Organizations need a method to preserve their workforce’s knowledge when workers inevitably leave, and AI provides a solution for this need.

It also helps mitigate the cost impact associated with underperforming talent; studies such as Aquant’s field service benchmark report indicate that the least effective field service employees can cost an organization 80% more than their most effective counterparts. In contrast, if every employee had the knowledge and skills to perform like the top 20% of the workforce, service costs would be reduced by as much as 22% – and this is achievable, with the help of AI.

How AI is training workers

AI assistance is training and upskilling less-experienced workers in real-time, bringing their efficiency up to par with their skilled counterparts. However, when it comes to sectors like field service, organizations will need a customized approach. Every service challenge is one of a kind and generic AI answers won’t cut it. Certain AI technologies out on the market – ones that take a personalized approach and are built for specific use cases – will collect and process data from multiple trusted sources, including expert insights, using advanced algorithms to deliver actionable information on devices like iPads and mobile phones.

Through advanced pattern recognition, the AI identifies key connections and behavioral patterns in the data, providing deep contextual awareness. Additionally, the AI integrates expert knowledge into reliable data to enhance decision accuracy, supported by an intuitive interface and a predictive algorithm. The system continuously adapts based on real-time feedback and business goals, evaluating solution effectiveness and making necessary adjustments to optimize and personalize service delivery.

By integrating these kinds of AI tools in the training and day-to-day operations of field service jobs, the industry can not only address the current labor shortage but also future-proof its workforce. This ensures that services remain responsive to the high and ever-increasing customer expectations.

Additionally, as AI handles more of the technical load, workers are able to shift their focus towards developing soft skills, such as customer service, communication, and teamwork. This shift makes the trades more appealing to a broader range of individuals, including those who may not have previously considered a career in this field due to its technical demands. Emphasizing soft skills enhances job satisfaction and adaptability, making trade careers more dynamic and fulfilling in an ever-evolving work environment.

To learn more about using AI to future-proof the field service industry and attract young talent, visit the Aquant website here.

Related News:

Salary Guide 2024 by Mondo Released for Attracting and Retaining Talent

RTO-Return to Office Study Found a Quarter of Execs Hoped for Turnover

The post Addressing the Labor Shortage in the Field Service Industry appeared first on Digital IT News.
