5 Ways Service Virtualization Helps Drive Application Quality and Innovation

Alan Baptista, Sr Principal Product Marketing Manager, CA Technologies

Today, more than ever, you need to deliver software as if your business depends on it, because it does. A recent survey conducted by Freeform Dynamics found that organizations considered Application Economy leaders see 2.5 times higher revenue growth and twice the profit growth, and generate one and a half times more business than their laggard counterparts.

Companies need to find new ways to move quality, innovative applications into and out of the development and testing lifecycle. Doing things the “same old way” will no longer cut it. Becoming a digital-first company also requires a shift in culture and focus from leadership. In the new digital world, the focus of IT needs to be on innovation, cost control and quality rather than efficiency alone. As a result, many companies have started to adopt agile development processes. By continuously delivering new features and releases more quickly, an organization can better meet the demands of consumers and outmaneuver its competitors.

Challenges in Achieving Continuous Delivery

Many companies struggle to deliver more innovative, higher-quality applications faster and more frequently. In most cases, their application delivery systems and processes were designed to push out only one or two releases a year, so the traditional “software factory” for transforming an idea into a customer experience becomes a chaotic and complex process with countless obstacles.

Different development teams typically work on different, interdependent parts of an application, so teams often sit idle waiting for other components to be completed. Then there is the question of who is responsible for the overall quality of the completed application.

In most organizations, this responsibility has fallen on the shoulders of the testing team. Manually creating and configuring development and test environments can be expensive and time-consuming, and testing is frequently postponed to keep the project moving forward. As a result, testing doesn’t happen until the end of the cycle, when errors and defects require significantly more rework than if they had been found earlier in the process.

Time to market isn’t the only thing impacted by all this inefficiency. When errors or defects aren’t discovered until after an application is deployed to production, the customer experience suffers, potentially damaging your brand.

Driving Innovation Faster with Service Virtualization

Service virtualization is the practice of capturing and simulating the behavior, data, and performance characteristics of dependent systems and then creating a virtual copy of those dependent systems. Those virtual copies, which behave precisely like the live systems, can then be used independently of the actual systems to develop or test software without any waiting. Service virtualization allows IT teams to deploy a “virtual service” of any dependency so you can code and test without constraint, saving time and money and sparing you the inevitable customer complaints, and embarrassment, that follow a release shipped without sufficient testing.
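To make the concept concrete, here is a minimal, hand-rolled sketch of a virtual service built with nothing but Python’s standard library. The account endpoint, paths and payloads are hypothetical, and a commercial service virtualization tool would capture and replay this behavior for you rather than requiring you to write it by hand:

```python
# Minimal sketch of a "virtual service": a stand-in HTTP endpoint that mimics the
# behavior and data of a dependent system so teams can develop and test against it
# without the live system. Paths and payloads below are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/accounts/1001": {"id": 1001, "status": "ACTIVE", "balance": 2500.00},
    "/accounts/1002": {"id": 1002, "status": "CLOSED", "balance": 0.00},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the real system.
    HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()
```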

Here are five ways service virtualization helps companies bring applications and/or user experience enhancements to market faster:

1. Enable Parallel Development

Increased agility and faster time to market require developers to be able to code and test iteratively, producing quality code before it is promoted downstream. Quality code requires tests that exercise not only the code within the test case but also its integration with other components or systems, which are typically unavailable. Service virtualization allows developers to virtualize these dependencies. Eliminating this constraint gives developers a sense of “virtual privacy” that is critical for maximum productivity. The value is amplified when larger development groups operate in multiple work streams that share dependencies to deliver customer features for a single application.
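As an illustration, the hypothetical test below exercises a billing component against the virtual account service sketched above rather than the real dependency (the ACCOUNT_SERVICE_URL variable, the fetch_account helper and the account data are all invented for this example), so a developer can keep coding and testing while another team is still building the real service:

```python
# Hypothetical sketch: a developer's test runs against a virtual account service
# instead of the real one, so parallel work streams are never blocked on each other.
import json
import os
import unittest
from urllib.request import urlopen

# Points at the virtual service by default; swap in the real system once it exists.
BASE_URL = os.environ.get("ACCOUNT_SERVICE_URL", "http://localhost:8080")

def fetch_account(account_id: int) -> dict:
    # Stand-in for the billing component's call to its dependency.
    with urlopen(f"{BASE_URL}/accounts/{account_id}") as resp:
        return json.loads(resp.read())

class BillingIntegrationTest(unittest.TestCase):
    def test_active_account_is_billable(self):
        account = fetch_account(1001)
        self.assertEqual(account["status"], "ACTIVE")

if __name__ == "__main__":
    unittest.main()
```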

2. Eliminate Mocking and Stubbing Efforts

There are several mocking and stubbing frameworks on the market with varying protocol support. The majority of them require writing code and struggle to support performance and load testing efforts. Virtual services are essentially a productized version of “mocking” or “stubbing” that can simulate the behavior and data of real systems under heavy load. Developers can rapidly build virtual models without writing code.
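The sketch below illustrates the data-driven idea under stated assumptions: request/response pairs captured in a recording file (the file name and format are invented for this example) are replayed by a generic, multi-threaded handler, instead of hand-writing mock code for every test:

```python
# Illustrative sketch: recorded request/response pairs are replayed as data rather
# than coded as mocks; the threading server lets the replay serve load tests too.
# "recorded_transactions.json" and its schema are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

with open("recorded_transactions.json") as f:
    # e.g. [{"method": "GET", "path": "/orders/42", "status": 200, "body": {...}}, ...]
    RECORDING = {(t["method"], t["path"]): t for t in json.load(f)}

class ReplayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        txn = RECORDING.get(("GET", self.path))
        if txn is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(txn["body"]).encode()
        self.send_response(txn["status"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Threaded so many concurrent virtual transactions can be served during load tests.
    ThreadingHTTPServer(("localhost", 8081), ReplayHandler).serve_forever()
```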

3. Reproduce Production Defects

Defect remediation efforts are known to cause gridlock between development and testing because of dependencies on over-utilized resources and environments. This leads to significant coordination and re-configuration overhead and long remediation timelines, hampering an application team’s ability to address critical defects that impact customer experience and revenue. The ability to quickly stand up virtual service environments instead of real ones allows development and testing teams to rapidly simulate production defect scenarios for debugging and subsequent code changes. For example, a study by the Voke group found that 38 percent of participants had cut their defect reproduction time by 50 percent or more by using service virtualization.
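As a concrete, and entirely hypothetical, illustration, a virtual service can be configured to replay the exact failure seen in production, here a slow, truncated response from an inventory dependency, so the consuming code can be debugged on a laptop instead of in a contested shared environment:

```python
# Hypothetical sketch: reproduce a production defect scenario locally by simulating
# the dependency's observed misbehavior (a ~5 second delay and a truncated payload).
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class DefectScenarioHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/inventory/sku-123":          # path invented for this example
            time.sleep(5)                              # latency observed in the incident
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"sku": "sku-123", "qty": ')  # deliberately truncated JSON
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8082), DefectScenarioHandler).serve_forever()
```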

4. Build a True “Release Candidate”

Each code check-in creates a new instance of the deployment pipeline, which is put through a series of tests optimized to execute very quickly. These commit-stage tests are typically restricted to unit tests because dependent systems have limited availability or are unavailable altogether. Simulating those dependent systems allows development teams to augment their test suites with additional acceptance and regression tests that provide greater confidence in the overall build. Modeling acceptance and regression tests to represent failures commonly seen in later stages avoids the cost of detecting and remediating those defects downstream.
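One way to picture this, as a sketch rather than a prescribed pipeline: the commit stage spins up a virtual dependency in-process and runs acceptance tests against it alongside the unit tests (the stub handler, port and test names below are invented for illustration):

```python
# Illustrative commit-stage sketch: start a virtual dependency in a background thread,
# then run acceptance/regression tests against it as part of every check-in.
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubAccountHandler(BaseHTTPRequestHandler):
    # Tiny stand-in for a dependent system that is unavailable at commit time.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"id": 1001, "status": "ACTIVE"}')

def start_virtual_dependency(port: int = 8080) -> HTTPServer:
    server = HTTPServer(("localhost", port), StubAccountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

class AcceptanceTests(unittest.TestCase):
    def test_account_lookup_contract(self):
        with urlopen("http://localhost:8080/accounts/1001") as resp:
            self.assertEqual(resp.status, 200)

if __name__ == "__main__":
    server = start_virtual_dependency()
    try:
        unittest.main(exit=False)   # runs alongside the fast unit tests in the commit stage
    finally:
        server.shutdown()
```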

5. Enable Performance Testing at the Component Level 

Performance problems are typically not detected until very late in the delivery lifecycle at best, or in production at worst. While component-based testing is an attractive concept, it has been very difficult to implement in practice. There are typically two main barriers: the ability to isolate a component, and the intelligence around a component’s interactions. Service virtualization enables you to isolate the component you want to test, simulate its dependencies, adjust response times, vary the data, and perform negative testing.
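The sketch below illustrates the idea with hypothetical dials: the virtualized dependency’s response delay, payload size and failure rate become parameters the tester can tune while load-testing a single component in isolation:

```python
# Hedged sketch of component-level performance testing: the virtual dependency's
# latency, data volume and error rate are configurable knobs. Values are illustrative.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

RESPONSE_DELAY_SECONDS = 0.250   # adjust response times
PAYLOAD_ITEMS = 1_000            # vary how much data comes back
ERROR_RATE = 0.05                # negative testing: 5% of calls fail

class TunableDependencyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(RESPONSE_DELAY_SECONDS)
        if random.random() < ERROR_RATE:
            self.send_response(503)          # simulate an unhealthy dependency
            self.end_headers()
            return
        body = json.dumps([{"item": i} for i in range(PAYLOAD_ITEMS)]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):            # keep load-test console output quiet
        pass

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8083), TunableDependencyHandler).serve_forever()
```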

Deciding whether or not to introduce a truly transformational technology into an organization with existing teams, tools and processes is no small feat. Adopting service virtualization requires a genuine appetite for change, as well as evangelists who recognize the significant, measurable benefits it can deliver.

Your part therefore becomes more important than ever: take on the leadership role of a true change agent who permanently and substantially improves your company’s rate of innovation, now and in the future. Settling for anything less than this complete transformation is not only counterproductive, it is defeatist, and it essentially concedes your lead to the competition.
