How will adopting an Edge Computing strategy benefit organizations?


Scale Computing, a market leader in Edge Computing, virtualization and hyperconverged solutions, has announced a collaboration with IBM that will help organizations adopt an Edge Computing strategy designed to enable them to move data and applications seamlessly across hybrid cloud environments, from private data centers to the Edge. Scale Computing delivers on-premises Edge Computing solutions designed to be flexible and intelligent, helping applications meet high-availability and resilience requirements.

A recent IBM Institute for Business Value report, Why Organizations Are Betting On Edge Computing: Insights From The Edge, revealed that 91% of the 1,500 executives surveyed indicated that their organizations plan to implement Edge Computing strategies within five years. IBM Edge Application Manager, an autonomous management solution that runs on Red Hat OpenShift, enables the secured deployment, continuous operations and remote management of AI, analytics and IoT enterprise workloads to deliver real-time analysis and insights at scale.

“We see that the periphery of storage and compute has undergone transformational changes on so many fronts over the last 18 months – from employee and customer health and safety, supply chain challenges, to shifting product demands, to being an increasing target of cyberattacks,” said Jeff Ready, CEO and Co-founder of Scale Computing. “We believe that Edge Computing is critical for the future of many organizations, and organizations of all sizes across all industries can take steps now to help simplify the deployment and management of localized compute infrastructure in a secure and resilient manner.

“We are excited to team our HC3 Edge Computing solutions with IBM Edge Application Manager in an effort to help organizations optimize their operations across the globe with infrastructure designed to be self-healing and automated, as well as added containerized application management that can help them grow into the new reality of Edge Computing.”

Scale Computing HC3 Edge Computing solutions are designed to provide customers with an autonomous infrastructure that can run modern containerized applications alongside legacy applications as virtual machines. This can help users centrally monitor and manage their fleet of distributed infrastructure and applications through the entire life cycle – from deployment and maintenance updates to service level monitoring and problem remediation.

“We look forward to collaborating with Scale Computing to help clients deploy, operate and manage thousands of endpoints throughout their operations with IBM Edge Application Manager,” said Evaristus Mainsah, GM, IBM Hybrid Cloud and Edge Ecosystem. “Together, we can help enterprises accelerate their Digital Transformation by acting on insights closer to where their data is being created, at the Edge.”

Justin Hurst, Field CTO at Nutanix APJ

If the last 18 months have taught us anything, it’s that gone are the days when an organization could expect the bulk of its employees to work from a central office.
With the world of work now increasingly fragmented and the workforce dispersed across cities, states and time zones, an Edge Computing strategy built upon a hybrid multi-cloud architecture will be fundamental to business success.
Not only are employees increasingly working from anywhere, but the ongoing proliferation of IoT devices also means that Edge Computing itself has evolved. Whereas Edge Computing was once primarily concerned with data collection and application delivery, it is now also ground zero for data processing – an evolution that demands significantly higher performance and scalability.
At its core, the aim of any Edge Computing strategy is to reduce latency in situations where instantaneous application response times can improve employee productivity, safety and business performance.
Today’s enterprises succeed or fail based on the insights they derive from their data and the pace with which they can uncover them. As such, Edge Computing is any organization’s ‘ace in the hole’.
With IoT devices now playing a central role in industries and workplaces as diverse as oil rigs, agricultural fields, manufacturing plants, transportation and healthcare, the data these devices generate has the most value where it is generated – at the Edge.
If these datasets had to first be sent back to a central data center before they were cleaned, analyzed and evaluated, those insights would be obsolete – in a best-case scenario – by the time they made it back to the Edge. In a worst-case scenario, a catastrophic failure occurs that could have been avoided had those insights been immediately available where and when they were needed most.
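The time-to-insight argument above can be made concrete with some simple arithmetic. The sketch below is purely illustrative – the round-trip and processing figures are assumptions, not measurements from any vendor:

```python
# Illustrative comparison of time-to-insight for central-cloud vs edge
# processing. All latency figures are hypothetical assumptions.

def time_to_insight_ms(network_rtt_ms: float, processing_ms: float) -> float:
    """Total latency: one network round trip to the compute location,
    plus the time spent cleaning/analyzing the data there."""
    return network_rtt_ms + processing_ms

# Assumed values: ~80 ms round trip to a distant central data center,
# ~2 ms round trip to an on-site edge node, 10 ms of analysis either way.
central = time_to_insight_ms(network_rtt_ms=80.0, processing_ms=10.0)
edge = time_to_insight_ms(network_rtt_ms=2.0, processing_ms=10.0)

print(f"central: {central} ms, edge: {edge} ms")
```

Under these assumed numbers the network dominates: the same analysis completes 90.0 ms after the event when routed through a central cloud, versus 12.0 ms at the Edge – the gap that decides whether an insight arrives in time to act on it.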
While business critical applications and devices at the Edge provide real-time responses thanks to their location, the data they’re working with is part of a larger application ecosystem. The same data used to make decisions at the Edge can also improve business intelligence, AI models and much more.
According to Gartner [gartner.com]: “Organizations now realize that a single centralized public cloud is not a panacea. Organizations will need multiple clouds to support specific functions and distributed clouds for edge processing with hyperscaler functions.”
The ability for edge applications and workloads to run in the best-fitting cloud environment according to variables such as cost, security and performance can unlock significant economic and competitive advantage.
To truly realize these benefits, however, organizations need the ability to dynamically shift workloads between these multiple clouds – whether private, public, or distributed – as requirements change.
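One way to picture the placement decision described above is as a weighted scoring exercise across the available environments. The sketch below is a hypothetical illustration – the cloud names, attribute scores and weights are all assumptions, not a real placement engine:

```python
# Hypothetical workload-placement sketch: score each cloud environment
# against weighted requirements (cost, security, performance) and pick
# the best fit. All scores and weights are illustrative assumptions.

CLOUDS = {
    "private": {"cost": 0.5, "security": 0.9, "performance": 0.6},
    "public": {"cost": 0.8, "security": 0.6, "performance": 0.7},
    "edge": {"cost": 0.6, "security": 0.7, "performance": 0.95},
}

def best_fit(weights: dict) -> str:
    """Return the environment whose weighted score is highest
    for a workload with the given requirement weights."""
    def score(attrs: dict) -> float:
        return sum(weights[k] * attrs[k] for k in weights)
    return max(CLOUDS, key=lambda name: score(CLOUDS[name]))

# A latency-sensitive IoT analytics workload weights performance heavily:
print(best_fit({"cost": 0.2, "security": 0.3, "performance": 0.5}))
```

As requirements change – say, a compliance review that raises the security weight – re-running the same scoring can shift the workload to a different environment, which is exactly the dynamic mobility the strategy calls for.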
The last, but certainly not least, element in an Edge Computing strategy is enabling the centrally located IT team to support the environment’s furthest boundaries. Despite the Edge’s increasing importance, having dedicated IT resources at every site is unfeasible. As such, the ability for the organization’s most highly skilled technical staff to support edge deployments from a central location ensures the ongoing success of any Edge Computing strategy.

Stephen Gillies, APAC Technology Evangelist at Fastly

The average consumer expects speedy online experiences. When milliseconds matter, processing at the Edge is an ideal way to handle highly dynamic, time-sensitive data quickly.

But until recently, many developers were only curious about the power of Edge cloud platforms, maintaining a healthy hesitation about building applications and migrating complex logic there.

Fastly, reliable and secure

Developers are at the heart of building fantastic online experiences, and Fastly was built to enable and empower those developers to write and deploy code at the Edge. We did this by making the platform extremely accessible, self-service and API-first. Developers have unprecedented real-time control and visibility that removes traditional barriers to innovation.

Access to logs in near-real-time enables quick debugging during the dev-push-validate loop. The result is a critical boost in visibility for edge compute applications and support for accelerated development cycles.

A cloud too far?

Enterprises globally have discovered that running modern applications from a central cloud can pose challenges related to latency, the ability to pre-scale and cost-efficiency. Software development life cycles (SDLC) have been corralled to suit cloud provider processes, which in turn impacts speed to market and change control.

To be clear, the Edge should not replace the central cloud completely – there are still important computing jobs to be done there. As a general rule, any app that requires persistent state is not a candidate for Compute@Edge. This includes big persistent data stores, legacy applications which haven’t been converted to microservices architectures and Machine Learning training.
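The stateless-versus-stateful rule of thumb above can be sketched in a few lines. This is a hypothetical, framework-agnostic illustration (not Fastly's actual Compute@Edge API): an edge-suitable handler is a pure function of the incoming request, so it can run at any edge location interchangeably.

```python
# Minimal sketch of an edge-suitable, stateless request handler: the
# response is derived entirely from the request itself, with no reads
# or writes to a persistent data store. Request shape is illustrative.

def edge_handler(request: dict) -> dict:
    """Stateless: output depends only on the input request."""
    lang = request.get("headers", {}).get("accept-language", "en")
    greeting = {"en": "Hello", "fr": "Bonjour"}.get(lang[:2], "Hello")
    return {"status": 200, "body": f"{greeting} from the edge"}

# By contrast, a handler that depends on persistent state (sessions,
# large data stores, Machine Learning training state) is tied to
# wherever that state lives, and belongs in the central cloud.
```

Because the handler holds no state between requests, any replica at any point of presence produces the same answer, which is what makes this class of workload a natural fit for the Edge.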

The JavaScript support you demanded without cold starts or increased security risks

Upon launch, Fastly Compute@Edge supported Rust. With so many developers accustomed to using JavaScript, we knew this presented a barrier to entry for Compute@Edge. Until now.

To bring JavaScript to Compute@Edge, we started by homing in on security. Isolation technology creates and destroys a sandbox for each request flowing through our platform in microseconds. This technology holistically minimizes attack surface area while maintaining the ability to scale and perform, and keeps your code completely isolated from other requests flowing through the platform.

Unlocking potential

Web application architecture has evolved substantially over time, but today too many organizations still adopt the strategy of running code on central, directly controlled platforms. Now, it’s time to rethink our legacy design principles and maximize value from Edge application architectures.

But no enterprise can pivot to new technologies on a whim. When assessing legacy architecture migration options, it is important to consider developer friction, which may include skill gaps and significant changes to SDLC processes. By utilizing existing team expertise and implementing a strategy that supports, and even celebrates, well-understood development practices, organizations can take advantage of the benefits edge compute brings.

Innovators and early adopters from a variety of industries have harnessed the Edge’s computing power, cost benefits and security features – offering up incredible customer experiences as signs of success. One fact is becoming clear: for a more performant and secure web, building at the Edge has significant benefits.

Intelligent CIO APAC

View Magazine Archive