Our Services

As a small consultancy, we aim to set customers on the right path to success rather than build long-running engagements. We believe the most successful changes are those your organization learns and incorporates into its DNA.

If you are interested in our consulting services, please get in touch!

Technical Support

As creators of the deltalake Python and Rust packages, we have been supporting Delta Lake applications since the beginning. Buoyant Data offers on-demand technical support for one-time help with your projects.
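For a sense of what working with the deltalake Python package looks like, here is a minimal sketch; the local table path and sample data are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the deltalake Python package; the ./tmp/events
# path and sample DataFrame are hypothetical.
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Append a small DataFrame to a (hypothetical) local Delta table.
df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
write_deltalake("./tmp/events", df, mode="append")

# Open the table and inspect its state.
dt = DeltaTable("./tmp/events")
print(dt.version())    # current table version
print(dt.files())      # underlying Parquet files
print(dt.to_pandas())  # read the table back into pandas
```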

Infrastructure Optimization

Sometimes you don't have the time or the need to re-architect infrastructure already in production. With access to logs from your data platform and details about your users' behaviors, we can assess the status quo. This allows you to squeeze faster queries and lower costs out of your current platform without needing to invest in newer technologies.

Rust Development

Our team of Rust developers is active on a daily basis in many prominent Rust-based data engineering tools. We can help your organization customize, develop, or deploy Rust-based solutions for high-performance, low-cost data services in any cloud or non-cloud environment.

With years of experience creating open source data infrastructure such as delta-rs, kafka-delta-ingest, and more, we are at the leading edge of the next wave of data tools powered by Rust.

Data Architecture Consulting

We partner with your engineering team to understand your goals with Databricks and Delta Lake and to ensure the best cost-to-performance ratio. This requires us to conduct interviews and review internal documentation to better understand your data flows. Depending on the scale of your existing or legacy infrastructure, we may also ask you to provide production usage logs so that we can simulate some alternatives.

Cost Monitoring

We provide monitoring and alerting for data infrastructure to ensure that your data platform delivers value while staying within budget. Our monitoring tooling relies upon usage log delivery from Databricks and a dedicated IAM user in AWS; together these provide the data we need to proactively identify cost anomalies and react.
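To give a flavor of what proactive cost anomaly detection can look like, here is a minimal, illustrative sketch: it flags days whose spend exceeds a rolling baseline. The CSV layout (daily date/usd_cost rows) and the 1.5x threshold are assumptions for this example, not a description of our production tooling.

```python
# Illustrative only: flag daily spend that exceeds a trailing baseline.
# The daily_usage.csv layout (date, usd_cost columns) and the 1.5x
# threshold are hypothetical assumptions for this sketch.
import pandas as pd

def flag_cost_anomalies(usage_csv: str, window: int = 7,
                        factor: float = 1.5) -> pd.DataFrame:
    """Return days whose spend exceeds `factor` times the trailing mean."""
    costs = pd.read_csv(usage_csv, parse_dates=["date"]).sort_values("date")
    # Baseline: mean of the previous `window` days (shifted so a spike
    # does not inflate its own baseline).
    baseline = costs["usd_cost"].rolling(window, min_periods=window).mean().shift(1)
    return costs[costs["usd_cost"] > factor * baseline]

print(flag_cost_anomalies("daily_usage.csv"))
```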