
How an SDET can improve your team's testing practices

As you might expect, Syntax Technologies is a leader in applying observability to the operational side of our own systems. The Unified Data Streams (UDS) team is quite proud of its instrumentation and alerting techniques, and maintaining a reliable, highly available data platform is a top priority for us. Our reliability practices, however, must guarantee not only that our platform is highly available but also that it consistently delivers correct results. Inaccurate results inevitably erode customer confidence in the services we offer; if the data is wrong, we might as well stop collecting it altogether.

Building reliability into our data platform is a broad topic, particularly when it comes to data streams. In this article, I'll describe one crucial step we took to improve the reliability of our testing practices.


Identifying areas for testing improvement

In short, the UDS team is responsible for gathering and filtering all incoming Syntax client data so that it can be delivered, in a manageable volume, to the teams in charge of the many products on our platform. For white-box testing (in which the tester is fully aware of the design and implementation of the code), our approach calls for substantial unit test coverage of all code the team writes. All of our code is tested with JUnit and measured with JaCoCo, and these tests run on every build; coverage is commonly required to reach 99 percent for builds to pass.
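As a minimal sketch of what one of these white-box unit tests looks like, here is a JUnit 5 example. The RateLimiter class and its API are hypothetical stand-ins for the kind of internal component the team covers, and JaCoCo measures the coverage that tests like this provide at build time:

```java
// A minimal white-box unit test sketch in JUnit 5. The RateLimiter class and
// its API are hypothetical, not part of our actual codebase; they stand in for
// the kind of internal component our unit tests exercise.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class RateLimiterTest {

    // Hypothetical component: allows up to `limit` events per account.
    static class RateLimiter {
        private final int limit;
        private int count;

        RateLimiter(int limit) { this.limit = limit; }

        boolean tryConsume() {
            if (count >= limit) return false;
            count++;
            return true;
        }
    }

    @Test
    void allowsEventsUpToTheLimit() {
        RateLimiter limiter = new RateLimiter(2);
        assertTrue(limiter.tryConsume());
        assertTrue(limiter.tryConsume());
    }

    @Test
    void rejectsEventsOverTheLimit() {
        RateLimiter limiter = new RateLimiter(1);
        assertTrue(limiter.tryConsume());
        assertFalse(limiter.tryConsume());
    }
}
```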

Our black-box testing approach, in which the tester has no knowledge of the design and implementation of the code, hasn't proven as effective as our white-box testing. As we began to grow more quickly and extend our services, it became obvious that we needed to boost confidence in our reliability standards by finding issues in our platform before our customers could.

We added a software development engineer in test (SDET) to the UDS team to fill this gap. SDETs possess a unique combination of strong engineering skills and a zeal for locating software problems. In addition to designing, developing, and executing tests, they play a crucial role in the development of the software that will ultimately be tested. The UDS SDET's charge would be to create a functional test suite that evaluates the overall accuracy of our systems.


A mission for our embedded SDET

My aim in embedding an SDET within the team was straightforward: a dedicated test engineer on the UDS team who is less familiar with the inner workings of our service would bring a fresh perspective to testing, one more in line with how our customers perceive how things operate. For instance, a client might look at an API specification and infer what it does from the API's name, its parameters, and their own prior experience with other APIs. Our developers, on the other hand, make assumptions about how well our clients will comprehend that API. An SDET's responsibility is to test for and uncover the assumptions the developers have made.


As a result, the UDS SDET's objectives would be twofold:

· Set up an integration test environment in the public cloud and run tests against our streaming services platform. Using the public cloud means we wouldn't be limited by our data centers' capacity or need assistance from other New Relic teams.

· Create a functional test framework with a broad range of tests to verify the accuracy of our clients' data. The SDET would prepare test data that mimics a variety of accounts so the tests could detect unexpected results and alert us when they occur. Functional tests exercise specific service components to make sure they work as intended. For the UDS team, the functional tests would have to cover the following areas of our code (see the sketch after this list):

· CRUD (create, read, update, and delete) operations on internal APIs

· Throttling accounts that exceed their data limits

· Handling late- and early-arriving event data according to established business rules

· Verifying that the GraphQL integration API endpoints are operational
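As a rough illustration of what one of these functional tests might look like, here is a black-box sketch of the throttling case. The endpoint URL, the X-Account-Id header, the provisioned limit of 10 events, and the 429 response are all assumptions made for illustration; a real test would target the actual service contract.

```java
// A black-box functional test sketch for the throttling behavior listed above.
// The endpoint, account header, provisioned limit, and 429 response are
// illustrative assumptions, not the real service contract.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class ThrottlingFunctionalTest {

    private static final String INGEST_URL =
            "https://test-env.example.com/v1/events"; // hypothetical test environment

    private final HttpClient client = HttpClient.newHttpClient();

    private HttpResponse<String> sendEvent(String account) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(INGEST_URL))
                .header("X-Account-Id", account) // hypothetical header
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"eventType\":\"Test\"}"))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Test
    void accountOverItsDataLimitIsThrottled() throws Exception {
        // Assume this test account is provisioned with a limit of 10 events.
        String account = "functional-test-account-over-limit";
        for (int i = 0; i < 10; i++) {
            assertEquals(200, sendEvent(account).statusCode());
        }
        // The 11th event should be rejected with 429 Too Many Requests.
        assertEquals(429, sendEvent(account).statusCode());
    }
}
```

Failing fast on the first unexpected status code keeps tests like this short enough to run against the cloud test environment on every deploy.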


Realizing the advantages of the SDET

The UDS team has benefited in a variety of ways from working with the SDET and the functional test framework. In particular, we've:

· Decreased the size and duration of the game days at the end of our feature development sprints, allowing us to deliver features to clients more quickly without sacrificing quality.

· Enhanced coverage in areas where it was previously absent or incomplete, which has increased our confidence in the software we ship.

· Used short functional tests to reproduce issues, speeding up troubleshooting for problems raised by customer support.

· Fixed issues that had previously gone unnoticed, such as finding and updating outdated software in our service that might have affected customers.

· Enhanced our deployment process, bringing it much closer to genuine CI/CD.

· Isolated complexity by enabling the team to gather the right information for feature testing when it mattered most.

To track the entire set of functional tests and their most recent results, our SDET designed a Syntax Technologies One dashboard, giving the team good reason to pop the champagne.

The culmination of the SDET's efforts is "Sea Dragon," a sophisticated functional test framework that now plays a crucial part in the team's build and deploy process.

Whenever a member of the UDS team initiates a deployment in our deploy pipeline, known as Grand Central, a test bot automatically runs the relevant unit tests. If the deployment goes well, the bot then starts a series of smoke tests.
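Grand Central and the test bot are internal tooling, but conceptually a smoke test at this stage can be as simple as verifying that the freshly deployed service answers on a health endpoint. Here is a minimal sketch; the URL and the health endpoint are assumptions, not our actual pipeline code.

```java
// A minimal smoke-test sketch: after a deployment, verify the service answers
// on its health endpoint before the pipeline proceeds. The URL is a placeholder.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SmokeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://uds-staging.example.com/health")) // hypothetical
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            System.err.println("Smoke check failed: HTTP " + response.statusCode());
            System.exit(1); // a non-zero exit fails this step of the deploy pipeline
        }
        System.out.println("Smoke check passed");
    }
}
```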

Last but not least, a companion bot offers a useful Slack integration to keep the team informed about deployments.
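For a sense of how such a notification works, here is a sketch that posts a deploy status message through Slack's incoming-webhook API. The webhook URL (read from a hypothetical SLACK_WEBHOOK_URL environment variable) and the message text are placeholders, not our bot's actual implementation.

```java
// A sketch of the kind of Slack notification a deploy bot might send, using
// Slack's incoming-webhook API (POST a JSON payload with a "text" field).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeployNotifier {
    public static void main(String[] args) throws Exception {
        String webhookUrl = System.getenv("SLACK_WEBHOOK_URL"); // hypothetical config
        if (webhookUrl == null) {
            System.err.println("SLACK_WEBHOOK_URL is not set");
            return;
        }
        String payload =
                "{\"text\": \"UDS deploy finished: unit tests and smoke tests passed.\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Slack responded: " + response.statusCode());
    }
}
```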

I'm in charge of ensuring that my team delivers new features for Syntax Technologies' platform while upholding operational quality, even as the platform and business continue to expand quickly. Hiring an SDET and developing a functional test suite have significantly improved our team's build process and velocity. Our bug count has also decreased, and we are now more confident that our software is dependable and provides the value we promise our customers. Running a functional test framework will remain a central focus of our reliability work going forward.


