A showcase of work we are very proud of
One of the things we pride ourselves on is listening to our clients to truly understand their needs. We've worked on a variety of products; here are a few we are particularly proud of.
Lifting and Shifting a Complex Legacy System
When our client Internet Brands (IB) acquired Avvo in 2018, they inherited a bespoke datacenter environment whose total cost of ownership (TCO) far exceeded the operating costs expected for other Internet Brands business units. Rapid River needed to guide the transition from the old datacenter into IB's multi-tenant datacenter with minimal downtime or developer impact.
Since Avvo already had a robust test environment management system hosted in AWS, we decided to start there. Working with the Avvo Platform team, we built a new deployment system based on Kubernetes that was agnostic to the operating environment (cloud or datacenter) in which the test environments run. Once we proved that all of the applications could run correctly in Kubernetes, we staged careful migrations of the staging and finally production environments into the Kubernetes-managed solution without changing how the developers or sysadmins interacted with the frontend deployment and orchestration tools. Total site downtime for the production move was under an hour, and the cost savings of the new operating environment were more than $60,000 per month.
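The environment-agnostic idea described above comes down to keeping the deployment logic identical everywhere and isolating the cloud-vs-datacenter differences in per-environment profiles. The sketch below illustrates that pattern with a simple manifest template; all profile names, registries, and values are hypothetical, not Avvo's actual configuration.

```python
# Hypothetical environment profiles: the same manifest template is
# rendered against whichever profile matches the cluster's home
# (AWS test environments or the Internet Brands datacenter).
PROFILES = {
    "aws-test":   {"registry": "example.ecr.us-west-2.amazonaws.com",
                   "storage_class": "gp2", "replicas": 1},
    "dc-staging": {"registry": "registry.dc.internal:5000",
                   "storage_class": "local-ssd", "replicas": 2},
    "dc-prod":    {"registry": "registry.dc.internal:5000",
                   "storage_class": "local-ssd", "replicas": 6},
}

MANIFEST_TEMPLATE = """\
image: {registry}/webapp:{version}
replicas: {replicas}
storageClassName: {storage_class}
"""

def render_manifest(environment: str, version: str) -> str:
    """Render the same deployment manifest for any environment profile."""
    profile = PROFILES[environment]
    return MANIFEST_TEMPLATE.format(version=version, **profile)
```

Because only the profile changes between environments, the deployment and orchestration tooling the developers interact with stays identical whether a build lands in the cloud or in the datacenter.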
Proven Results / Value Add
All Avvo environments - testing, staging, and production - were migrated with no impact on end users or technical support staff. The annual operational savings neared seven figures, with the added benefit that the entire platform now ran on Docker and Kubernetes.
Modernizing and Simplifying a Web Analytics System
Our client Lawyers.com was saddled with a bespoke legacy web analytics system implemented in more than 200k lines of PL/SQL code. Maintenance of this system required a team of around ten developers and was a primary driver behind annual database licensing costs of almost $10,000,000.
The client initially asked us to port the existing system to a free, open-source database such as PostgreSQL. However, upon further inspection we determined that the ingestion and aggregation processes that had been implemented in stored procedures were much better suited to external ETL processes that could be written and tested as independent units, storing their final results in a database for easy access and report generation. Working from the inputs (web analytics events) and outputs (desired reports), we replaced the system between these two ends with a series of small tools that performed each step of the data collection and analysis. We also built an automated quality assurance tool which showed us discrepancies between the output of the system we were building and the existing system. We then brought each of these tools to 100% test coverage to ensure that all of the quirky behaviors required for backwards compatibility continued to function as we refactored. Lastly, we included tools in our implementation that allow for convenient review and fixing of problematic incoming data. We launched our re-implementation without any disruption to the system.
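The automated quality assurance tool mentioned above can be thought of as a key-by-key join of the legacy output against the new pipeline's output, reporting every mismatch. A minimal sketch of that idea, with a hypothetical report shape (this is illustrative, not the production tool):

```python
def find_discrepancies(legacy: dict, replacement: dict) -> list:
    """Compare two report outputs keyed by (metric, date) and return
    every key whose value differs or that is missing from one side."""
    problems = []
    for key in sorted(set(legacy) | set(replacement)):
        old, new = legacy.get(key), replacement.get(key)
        if old != new:
            problems.append((key, old, new))
    return problems

# Example: legacy PL/SQL aggregates vs. new ETL pipeline aggregates
legacy_report = {("pageviews", "2020-01-01"): 1042,
                 ("uniques", "2020-01-01"): 311}
new_report    = {("pageviews", "2020-01-01"): 1042,
                 ("uniques", "2020-01-01"): 309}
```

Running the comparison over each nightly batch surfaces exactly which metrics drift from the legacy system, which is what makes it safe to preserve quirky backwards-compatible behaviors while refactoring.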
Proven Results / Value Add
Our client was able to retire their expensive Oracle database license, which reduced operating costs significantly. Stats collection and report generation continued without interruption, and iterative development of the system proceeded with staffing of only 1.5 full-time-equivalent developers.
Automating Software Deployment to Robots
Our client was an early-stage robotics company performing Rapid Application Development of a robotics control platform for warehouse robots. When Rapid River joined the project, deploys were done manually by copying individual software builds onto robots using remote file copy commands from the Unix command-line prompt. This approach to deployment was unreliable and left robots in an unknown state since no versioning or packaging was used to deploy software.
The project required reliable, versioned, over-the-air deployments to robots in their testing facility using methods that were easy for developers to understand and to diagnose in the case of problems. Drawing from our background in traditional datacenter deployments, we started by understanding how the robotics developers were handling feature branches and code merges. Once we understood their approach, we provided feedback on workflow improvements that would make CI/CD easier for the entire team to work with. With these workflow changes in place, we worked with the development team to implement reference unit and integration tests for their ROS-based software packages. These tests gave our deployment system a good chance of stopping broken builds before they reached any robots. Having established the branching standards and testing strategy, the remainder of the project involved creating templatized Jenkins jobs for each of the underlying packages, automating dependency resolution, and pushing versioned packages out to the robots as deployment targets from a master Jenkins server. The last stage added facilities for rolling back unsuccessful deployments so that a robot would always remain addressable, even after a failed deploy. Finally, we trained the developers on how to maintain and extend these CI and CD jobs so that future packages and subsystems could easily be added to their platform without inventing a new workflow.
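The rollback behavior described above reduces to a simple contract: install a versioned package, health-check the robot, and reinstall the last known-good version if the check fails, so the robot stays addressable. The sketch below simulates that contract with illustrative stand-ins; the function names and the in-memory "robot" are hypothetical, not the client's actual deployment code.

```python
def deploy_with_rollback(robot, new_version, install, health_ok):
    """Install new_version on the robot; if the post-deploy health
    check fails, reinstall the previous known-good version."""
    previous = robot.get("version")
    install(robot, new_version)
    if health_ok(robot):
        return new_version           # deploy succeeded
    if previous is not None:
        install(robot, previous)     # roll back to known-good build
    return previous

# Simulated robot and deploy primitives for illustration only:
robot = {"version": "1.4.0"}

def install(r, version):
    r["version"] = version

def health_ok(r):
    # Pretend one particular build fails its post-deploy check.
    return r["version"] != "1.5.0-broken"
```

Because the rollback path reuses the same versioned-package install step as a normal deploy, a failed push leaves the robot running a known build rather than in an unknown state.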
Proven Results / Value Add
Developers on the client team were able to understand and use the Jenkins jobs right away and make changes as their system evolved. This took away a large headache that had been slowing the rapid development process and freed the developers to move forward on new features and improvements.
If you love writing software that runs on the Internet, we'd love to talk with you.
Contact Info:
• Phone Number: +1 781.974.0366
• Email: email@example.com
• Office Address:
268 Osprey Circle
St Marys, Georgia 31558