Technology

Learn more about technology today.

Maximizing Product Success: The Strategic Advantage of Service Partnerships


Core to our growth and our ability to deliver world-class solutions to our customers have been the partnerships we’ve cultivated with key product partners. Products such as Pulumi and Maxio have been outstanding long-term partners: these relationships boost our capabilities, ensure predictability in delivery, and fill professional services gaps in their offerings at no cost to them. The success of these partnerships prompted us to share some thoughts on why your product team should consider a partnership with Elevate.


Traditional and academic thinking on the topic has often encouraged business leaders to choose a path of being either a product company or a services company, especially in the early stages of maturity. That’s not to say there aren’t companies that choose to do both and succeed, but doing so can be comparable to running two companies at the same time, a task that is arduous to say the least. They’re separate business models with very different compositions, incentive structures, financials, and so on.

But being a product company often doesn’t obviate the need for a services component to your offering. You may be close to a deal, but the prospective client doesn’t have the capacity or the skill to onboard your tool. Or maybe they need to integrate your product with one of their internal tools, an integration your product doesn’t support out of the box. Or, in some cases, a prospective client is considering a full digital modernization as part of their onboarding. In all of these cases and more, a professional services offering can get you to these deals and create long-term relationships.

Over our years working with clients to roll out partner solutions, we’ve identified some key benefits to working with Elevate as your preferred services provider. We’ve captured them below as things to consider as you confront how your company addresses your services component.

A long list of early- and mid-stage companies have failed because they let their development team get sidetracked by the needs of a potential “big” client. They take their best engineers off the product roadmap to onboard the client, weeks or months go by, and the core of the product suffers because attention has been diverted. In the best case, you land the client and are able to recover over time. In the worst case, you lose the client, and perhaps your company with it.

This is where Elevate comes in. We can manage that relationship and provide them with white glove service in rolling out your product and configuring it to their needs. You leverage our knowledge of your tool to get to a successful result while maintaining the velocity of your product roadmap. A win-win-win for you, for your new client, and for Elevate.

A successful partnership is one that works both ways: your team brings us new clients, and we do the same for you. At no cost to you, a strong services partnership can drastically expand your sales reach. We’re onboarding new clients all the time, and our existing clients are consistently looking for modern solutions to improve their products. We also strongly prefer to work with products we know and trust, which allows for accurate estimates and predictable deliveries that lead to happy clients and strong margins. Where appropriate, we also get involved with co-marketing activities, partnering on webinars, events, conferences, and more to promote the partnership jointly.

There is no doubt that the business models of a services company and a product company are different, and with that, expectations on margins are different. Products that achieve large valuations often do so because of their ability to scale exponentially. Services will boost your top-line revenue but will almost always squeeze your margins, something that is likely to be unappetizing to your investors. A trusted services partnership with Elevate allows you to reap the benefits of services while maintaining your product margins, keeping your investor base happy and delivering the returns they’re looking for.

It’s one thing to successfully land a deal; it’s another thing entirely to ensure that the client onboards and uses your product successfully. The last thing you want is a mistake by your new client leading to a failed launch and the loss of a long-term customer. Our team learns your product inside and out, doing hands-on implementation and completing training where available. The simple act of doing multiple rollouts brings pitfalls to the forefront and allows us to avoid them. In short, our team brings consistency to your rollouts and enhances the customer experience.

Customer satisfaction management has become a focal point for successful product companies, especially in the SaaS world, and for good reason, particularly when your revenue is driven by consumption. Elevate can supercharge your efforts in this regard. We have a long and consistent track record of cultivating long-term relationships with our customers. We adopt their vision and maintain correspondence with them, jumping in where necessary to ensure that their solutions leverage your product correctly and drive successful outcomes.

Wrapping Up

It’s natural that a product would need services to accompany it. It’s also reasonable that a product company would want to stick to what it does best. Finding ways to achieve services outcomes while maintaining a product focus can make or break a company, and a great product/services partnership can be the catalyst for exponential growth and long-term success. If this resonates with your current situation, we’d love to hear from you. Reach out to us and we can dig into the operational aspects of making a partnership successful!

The Who of Hackers

These days, the ‘who’ aspect of malware threats often gets downplayed in the fight to protect data and recover from disasters. There is a common misconception that protecting systems against known, generic attack vectors will deter all types of bad actors. But the context and thought processes behind the attacks really matter here: they give insight into the motivations, skill levels, capabilities, and funding behind the threats, as well as the comparative danger posed by an attacker’s specific goals.


Elevate 5G Partners with DISH to Launch Groundbreaking Automated Testing Platform


In collaboration with DISH, Elevate 5G introduces a game-changing 5G testing solution, aimed at broadening access to state-of-the-art test tools across various sectors.


Washington, DC, September 26th: Elevate 5G, in a strategic partnership with DISH, proudly announces the official launch of its cloud-based automated testing platform, with Pente kicking off the inaugural testing. This initiative signifies a transformative approach to 5G testing, opening up the market to a diverse range of entities – from startups and enterprises to government agencies – and ensuring that all have access to unparalleled testing tools.

“5G and OpenRAN have created an amazing opportunity to leverage our cloud capabilities and unique partnership with DISH’s world-class 5G lab. Through our industry-leading experience with modern cloud technologies, our team is completely disrupting the status quo of 5G testing,” remarks Kevin Schreck, CEO of Elevate.

Jonathan Schwartz, founder and CTO of Pente, echoes this sentiment. “5G SA private cellular networks are set to be game-changers in enterprise transformation, particularly in applications such as AI for Industry 4.0. In collaboration with our partners, Elevate and DISH, and their advanced 5G lab, Pente is committed to ensuring our entire product suite meets the most stringent carrier-grade standards.”

Pente’s focus remains steadfast on ensuring that their enterprise solutions are on par with high-capacity public networks in terms of quality and reliability. The partnership with DISH 5G and the innovative AWS-based testing environment developed by Elevate are a testament to this dedication. This collaboration provides Pente with a unique opportunity to rigorously test their cloud-based 5G SA core and orchestration solutions in an authentic 5G setting, equipped with advanced tools designed to emulate the most complex network scenarios.


About Pente:

Pente delivers cloud-based private cellular network core, orchestration, and management solutions, enabling swift deployment for LTE and 5G private networks. Catering to any industry, Pente allows entities to tap into the power of private cellular networking without the hefty price tag of telecom equipment or specialized expertise. Pente’s non-proprietary solution is compatible with any RAN infrastructure, boasting over 700 APIs that streamline deployment, monitoring, reporting, and management processes, paving the way for future scaling and innovation. Currently, the Pente platform oversees millions of IT-grade SIMs in devices worldwide. For more details, visit pentenetworks.com.

About Elevate 5G:

Elevate 5G, part of the Elevate brand family, specializes in the design, implementation, integration, and maintenance of contemporary cloud technologies for both government and private sector stakeholders. The Elevate 5G vision is to revolutionize the telecom arena, offering the ability to conduct tests on top-tier tools at a mere fraction of the current market price and timeframe.

Media Contacts:

Elevate 5G
Kevin Schreck, CEO
Kevin@elvtgovt.io
202.945.4833

Pente
Claudia Barbiero, VP Marketing
claudia@pentenetworks.com
sales@pentenetworks.com

Instant Language Model Web App on Your Home Desktop


By: Travis Harrison


Impressive large language models (LLMs) have been released by technology companies, including OpenAI’s ChatGPT, Facebook’s Llama, and Google’s Bard. LLMs can be used to generate text, summarize text, answer questions, and more, but attention has largely focused on their ability to answer questions surprisingly well.

These models have grown to 100+ billion parameters and are trained on hundreds of gigabytes of text data. The sheer volume of data, the number of parameters, and advanced model architectures make for very convincing AI models. Still, the models have their limitations and carry warnings about their potential biases and hallucinations.

Get Started

How do we run such large models? It turns out this can be pretty hard: the parameter counts are so large that simply loading a model requires dozens of gigabytes of memory. So what if we want to run one on our own machine?

We can use a few tricks and variations of the models in order to run them on a consumer desktop and wrap them in a web app for easy use. Here we are going to use the Facebook Llama 7B model, the smallest variation of the Llama model. In addition, we are going to quantize the weights down to 4 bits and change the batch size to one. The quantization reduces the precision of the weights while maintaining most of the performance. The reduced batch size increases latency and limits the scalability of the model by removing parallel processing.
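To see why 4-bit quantization matters, some back-of-the-envelope math helps. The sketch below (in TypeScript) counts only the memory for the weights themselves and ignores runtime overhead such as activations, so the numbers are rough lower bounds:

```typescript
// Approximate memory needed just to hold a model's weights, for a
// 7-billion-parameter model at several precisions.
const PARAMS = 7e9;

function weightGigabytes(bitsPerWeight: number, params: number = PARAMS): number {
  return (params * bitsPerWeight) / 8 / 1e9; // bits -> bytes -> GB
}

console.log(weightGigabytes(32)); // fp32 full precision: 28 GB
console.log(weightGigabytes(16)); // fp16 half precision: 14 GB
console.log(weightGigabytes(4));  // 4-bit quantized:     3.5 GB
```

Quantizing from 16 bits down to 4 cuts the weight footprint by a factor of four, which is what brings a 7B model within reach of an ordinary desktop.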

Finally, we will use the open-source Dalai software, which will quantize the model and serve it in the browser!

Prerequisites

  • Linux Operating System
  • 14 GB+ RAM

Installation

  1. Install packages for the model:

     sudo apt update
     sudo apt upgrade
     sudo apt install g++ build-essential python3.10 python3.10-venv

  2. Update ~/.bashrc with:

     alias python=python3

  3. Reload the current terminal configuration (or restart the terminal):

     source ~/.bashrc

  4. Install nvm:

     curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
     export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
     [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

  5. Install Node.js with nvm:

     nvm install 18.15.0

  6. Install Dalai and run it:

     npx dalai llama
     npx dalai serve

  7. Go to localhost:3000 and start using your very own language model!

There you have it: your own personal language model! There is much more to explore about LLMs, from improving worker efficiency to integrating them with existing products. Follow us to learn more… like how you can use the new ChatGPT plugins that integrate directly with external knowledge by giving the model access to web browsing, code interpretation, and retrieval from self-hosted knowledge bases!

Streamlining Infrastructure Management: The Power of Pulumi


By: Alex English


Here at ELVT, we run many projects in the cloud, each with multiple environments, which requires a great deal of infrastructure. To manage this in a sane way, we use automated infrastructure provisioning. Without it, we’d have to manually configure each project in the cloud for our clients’ use. Environments for our projects typically involve many components: Kubernetes clusters, databases, cache servers, monitoring and logging infrastructure, and more. Configuring every project and every environment manually can be an arduous and error-prone process. There are many tools for automating it, such as Terraform, Ansible, and Chef. At ELVT, we partner with Pulumi and use it for our own projects as well.

What is Pulumi?

Pulumi is a platform that manages infrastructure. At its core, Pulumi stores state and exposes an API that allows us to manipulate that state. When we run pulumi up, it executes our code, compares the new state to the existing state, and manages any changes. For example, our state can be a set of S3 buckets in AWS, a load balancer in Azure, a Kubernetes cluster running on bare metal, a database, and so on. Here’s a simple TypeScript example of Pulumi code that configures an S3 bucket in AWS:
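A minimal sketch (assuming the classic @pulumi/aws provider; the bucket name and tags are illustrative):

```typescript
import * as aws from "@pulumi/aws";

// Declare an S3 bucket; `pulumi up` records it in the stack's state
// and creates (or updates) the real resource to match.
const bucket = new aws.s3.Bucket("my-example-bucket", {
  acl: "private",
  tags: { Environment: "dev" },
});

// Export the bucket's name so other stacks or tools can reference it.
export const bucketName = bucket.id;
```

Running pulumi up against a stack compares this declaration to the stack’s stored state and applies only the differences.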

In Pulumi, these states are isolated into independent stacks. Each stack is isolated from the others, and when we execute our Pulumi program, it runs against a particular stack. Typically, a stack is an environment (staging, production, QA, etc.), but as we’ll see later, we can also use stacks for local development environments.

Why Infrastructure as Code?

Manual configuration is error-prone. For example, when setting up an Application Load Balancer in AWS, there are many screens (networking, rules, target groups, etc.) that all have to be configured correctly for the ALB to work properly. We’ve set up a great many of these at ELVT, and while ALBs are extremely powerful and versatile, every part has to be configured correctly for them to work. If a health check is set to the wrong port, EC2 instances will be removed from target groups and requests won’t go through. It often takes a few iterations of configuration to make the stars line up. Although it may not take much time, it can cause frustration and lead to problems that might not become apparent until a project is further down the road.

Infrastructure as Code (IaC) removes those roadblocks. Once a block of code is verified to be working as intended, we can use it over and over again with predictable results. IaC is repeatable. With Pulumi, we can run this code again in a given environment without making any changes. In this sense, Pulumi code is idempotent.

Typically, an organization has one or more test environments. There may be a development environment, a staging environment, and then ultimately a production environment. Say we’ve got a new feature that requires not only code but also a set of S3 buckets hosted behind CloudFront. In development, we can work out the relationships between those moving parts of the infrastructure in our Pulumi code. We can verify that buckets are created, that CloudFront is configured correctly, that certificates are generated correctly, and even that the dreaded IAM permissions are right. Because Pulumi is written in the language of our choosing, we can design modules that take configuration parameters (names or prefixes) that separate one environment from another.

Once we’ve successfully worked this out in development, we can simply switch the stack to the next environment and run pulumi up against that environment. This avoids having to go through the whole configuration process again, manually copying configuration from one browser window to another. We can then test and make adjustments as needed.

Why Pulumi?

Pulumi is one of a few commonly available IaC platforms. What sets it apart is that Pulumi allows us to write code in languages we’re already familiar with. In the example above we chose TypeScript. This lets developers work in a language they’re comfortable with, instead of having to learn a domain-specific language such as Terraform’s HCL, which is notorious for its steep learning curve. In our case, TypeScript’s powerful typing gives us added benefits in the IDE, checking the types in our code and surfacing strong API documentation in the editor.

Additionally, using Pulumi in our native programming language (whichever that may be) allows us to reason about infrastructure on our own terms. We can create components that share configuration and structure that configuration in ways that we’re used to. For example, when we write an application we can create config objects and serialize them as we want. And with Pulumi, we can use that same approach in our infrastructure.
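As a sketch of that idea (the type and values below are illustrative, not from a real project), the same config object can serve both the application and the infrastructure code:

```typescript
// One config shape shared by application code and infrastructure code.
// All names and values below are illustrative.
interface ServiceConfig {
  region: string;
  bucketPrefix: string;
  replicas: number;
}

const staging: ServiceConfig = {
  region: "us-east-1",
  bucketPrefix: "myapp-staging",
  replicas: 2,
};

// The application serializes it like any other config object...
const serialized = JSON.stringify(staging);

// ...and Pulumi code can consume the same type when naming and sizing
// resources, e.g. new aws.s3.Bucket(`${staging.bucketPrefix}-uploads`, ...).
const parsed: ServiceConfig = JSON.parse(serialized);
console.log(parsed.bucketPrefix); // "myapp-staging"
```

Keeping one typed config shape means an environment’s name, region, and sizing are defined once and cannot drift between the app and its infrastructure.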

Pulumi at ELVT Consulting

At ELVT, one of our clients is a successful fintech company that deals with large amounts of financial documents. Many case studies with Pulumi deal with successfully managing production and staging environments, and we’ve done that here as well. However, we’d like to go over one way that we’ve used Pulumi successfully to help manage a growing team.

The infrastructure for this client was set up in AWS EKS, the Elastic Kubernetes Service. The main application used about six different Kubernetes pods along with an ingress, storage, and other Kubernetes components. It was uneconomical to host twenty or thirty different development environments in the AWS staging cluster, but to cover testing for some functionality, developers had to reproduce the Kubernetes infrastructure either locally or on a shared development server.

This company started small and, as all successful companies do, hit a period of rapid growth in its engineering staff. Pulumi handled production, staging, and other lower environments. However, we needed a way to set up development environments either in Minikube on someone’s local machine or on a shared development server in AWS.

Here’s where Pulumi really shone for us. The Pulumi Kubernetes provider works anywhere kubectl works, whether against AWS, a shared bare-metal development server, or Minikube on a local machine. We’d onboard a new developer, create a Pulumi stack for them on a shared server, and they’d be off with a minimum of configuration work.

Let's Wrap it Up!

AWS infrastructure can be very complicated, especially for our larger projects. We have to support developer environments, cloud environments, and sometimes bare-metal or hybrid cloud environments. In addition to saving time and sanity, Pulumi makes this much simpler and more portable. 

Interested to hear more or just want to discuss what we can do with your specific needs? You can ping us directly at kevin@elvt.io or go straight to our calendar.

Navigating App Development in an Uncertain Economy


By: Kevin Schreck


There’s no doubt that current economic conditions have created headwinds for nearly every facet of our lives. This becomes even more pronounced when looking at the prospect of large development expenditures that may or may not have immediate returns.

Let’s take a look at how macroeconomic conditions can affect your development and ways in which you can position yourself to make the current market work to your advantage.

Start with a Plan

Whether you’re starting from scratch or looking to add additional features to your software, we highly recommend that you begin your efforts with a well thought out business plan. When creating this plan it’s important to consider the following:

  1. View the plan through multiple lenses. Perspective is vital, and that includes taking into account views other than your own. Think about your users, investors, partners, and others as you develop your plan.
  2. Take into account macroeconomic factors. Assess market price sensitivity, availability of capital, availability of human capital, and other factors that are sure to impact your success.
  3. Understand the why of what you’re attempting to undertake and how current market conditions fit with it. For example, in a market downturn people are often looking for ways to cut costs. This may be an area you’d wish to address, with either an application used by external users or something that addresses efficiencies within your own internal operations.
  4. Remember that it’s the act of planning that is most important, not the plan itself. Things are going to change, but it’s important to have thought them through.

Determine a Funding Strategy

Building software comes at a cost, and until recently, access to capital for projects was historically cheap and easy to attain. Low interest rates and novel technology made taking on capital more palatable, and acquiring it easier, than it had ever been.

But markets are cyclical and as with all things they change.

Recently, interest rates have risen dramatically, and with that, access to capital from banks, VCs, angel investors, and others has become more difficult.

If you can bootstrap your efforts, this is likely a good time to go that route; however, in many cases that just isn’t feasible. Think about what you have at your disposal and leverage your business plan to determine a rough return on investment. If you’re looking to investors, determine what you’re willing to part with in terms of equity, as well as the expectations you have of your investors’ involvement.

As mentioned earlier, as you seek capital, a well-articulated plan will be critical to reducing the risk to your financial backers and securing the best possible deal for your funding.

Consider Phased Development

We all want to go to market with the complete package, but often that’s not really necessary. Other times, we just need an MVP in place to get to a pilot group or have a demoable product for investors.

At Elevate, we guide our clients through a phased approach to development that’s specific to their needs. We take into consideration their key stakeholders, budgets, timelines, and other factors to develop a phased approach that spreads out their expenditure while also focusing on key near term targets and allowing for adjustments as long term targets change.

As you start your technology journey, begin to think about the nuances to your market entry. What are the must-have features for your application’s production release? Can you focus on a subset of features that get you to a revenue generating beta release? Are there high cost features that you can defer out to a later release when you have steady revenue? These questions can help you to plan the most effective timeline for your development.

Choose a Development Partner that Understands You

The right development partner will have experience with all of these elements and work with you to cultivate a situation that fits your needs.

One of the advantages of working with a development partner is the ability to spread your costs over time without incurring the costs of internal labor during periods when you’re not in a development-focused mode.

Our team at Elevate maximizes your value while expediting your path to revenue and sustained growth. We give you the flexibility to chart the path you need and the nimbleness to scale rapidly when the time to run hits. We’ve been at this a long time. Leverage our expertise to guide you through your challenges.

Interested to hear more or just want to discuss what we can do with your specific needs? You can ping me directly at kevin@elvt.io or go straight to my calendar.

Web App Development Commonly Asked Questions


By: Kevin Schreck


One of the great things about being in the tech development business is that we get the opportunity to work with a wide array of industries and an even wider array of people. It’s an exciting process when a new client approaches us with an idea that they’ve been shaping for months or even years. While it is exciting, we also recognize that there’s a certain amount of fear that can delay or, at times, completely stop the development process from beginning. As clients try to make sense of technology, we’ve noticed some commonality in the questions they ask. In this blog, we’ll be answering those questions, including:

Can I Learn to Develop an App on my Own?
How Much Will the Project Cost?
Should I Outsource Overseas?
Is a Cheaper Rate Better?

Let’s dive in…

Can I learn to develop an app on my own?

Sure. But to that I’d ask – do you have multiple years to dedicate to the craft of software development? I’m guessing the answer is no. You’ve got a business to run and goals to attain. Trying to build your own application is going to undoubtedly increase your time to market, decrease the stability of your application, and take you away from the things you need to do (raising capital, marketing, road shows, etc.). In fact, it’s not uncommon for our clients to have a background in development and still trust their development to the Elevate team. Having people with decades of experience and hundreds of applications built on your side is only going to supercharge your development and allow you to focus on the things that drive your business forward.

How much will the project cost?

The short and honest answer is: it depends. If someone quotes you a number in your first conversation, be very suspicious. A whole host of factors come into play when evaluating the scope of an application. The level of customization, number of integrations, number of users, acceptable level of downtime, and many other factors must be considered. Some applications can be built in a matter of days by a single developer. Others may take months or years with an expansive team at your disposal. The important thing is to know the order of magnitude you can afford, identify your near-term vs. long-term targets, and be prepared to prioritize features if everything you want isn’t feasible within your budget.

Should I outsource overseas?

There are many perceived benefits to offshoring your development, and in some instances those benefits are achieved. We recommend that you analyze the complexity of what you’re looking to do, your ability to overcome time-zone differences, whether the software will be used by the US government, and, probably most importantly, your ability and capacity to oversee an offshore team. It’s a regular occurrence for prospective clients to reach out to us after a failed experience overseas. Oftentimes the failure has little to do with the capability of the offshore team and much more to do with the friction created by offshoring itself. Be introspective in analyzing offshore options and think not only of your short-term objectives but also your long-term objectives. Lower rates may seem like an instant win, but the total cost of ownership is more often than not higher in an offshoring arrangement. Which leads us to our next question.

Is a cheaper rate better?

Much like the overseas question, this often comes down to what you’re trying to accomplish. If you have very simple needs that require little ongoing effort, then the cheapest option shows a lot of promise. We’re guessing that’s not your case, however, as software is inherently complex and you likely have big goals to accomplish. We recommend that in your initial review of development options you remove rates from the equation unless they are wildly out of sync with your budget. Focus on capability, responsiveness, and the ability to understand your vision. The right development team will steer you in the right direction and, regardless of rates, reduce your total cost of ownership by selecting a technology stack that fits your individual situation, minimizing rework, and building scalable solutions that meet your short-term and long-term needs.

We get it. When it comes time to bring your idea to life, the process of launching your development can be very daunting. Just know that this isn’t uncommon and that with the right team in play, you can meet all your goals and more.

Have more questions? Or just want to discuss how this applies to your specific situation? Feel free to reach out to me directly at kevin@elvt.io or go straight to my calendar here.

How to Design a Robust API


By: Daniel Ors, Gabe Martinez

In this guide, we will walk through how to go from the need for an API, through its design and documentation, to its subsequent implementation. This is an entry in our API Design and Documentation series. If you haven’t read our Attributes of a Quality API installment, we recommend starting there.

Designing and implementing an API can be a daunting task. Our goal is to provide you with a comprehensive guide on how to approach designing an effective API and its documentation. To do so, we will break the process down into four steps. The first step in your API’s design and development is to precisely define what is needed from your API.

Research

Your API needs to provide the right functionality for its consumers, which means you need to rigorously define what that functionality should be. Accurately defining the purpose and scope of your API will provide crucial guidance for its design and implementation. The purpose of your API should be determined by considering the problem you are trying to solve. To elaborate, you should consider who will use the API (who needs the solution) and what they want to do with it. Ensure that you have acquired the necessary domain knowledge for the problem space, and communicate with the parties who will utilize your design as well as the parties who are affected by it. These are the consumers of your API, and they will be able to assist you in collecting and clarifying its use cases and requirements. This communication is key to gaining perspective on what your design should offer. We also strongly recommend that you research existing solutions and learn from their strengths and weaknesses. Rigorously collecting and detailing this information will give you what you need to design an API that is both comprehensive and effective.

Consolidate

Now that you have determined the purpose of your API and its requirements, you can start to transform this list into your design specification. To begin, filter and refine the list of requirements and use cases you want to cater to. Incorporate initial features and potential future enhancements into your considerations. Detail the overarching workflows and usages. Outline the expected behaviors and business logic of the API. Understand and document potential dependencies and interactions, both internal and external. From here, define models, their relationships, and how the API will interact with them. Important: document this information for future reference, and utilize the tools available to you. Entity-relationship diagrams, flow charts, and other visual aids are invaluable.

With this information in hand, you will find that your specification starts to construct itself. Begin documenting your design and specification with best practices in mind; check out our API Design Best Practices guide for a refresher. Once you have a detailed specification, refer back to the information you have organized and adjust if necessary. Important: your design will change over time as you document and implement the API. Use your knowledge base and an Agile methodology to keep it flexible enough to cater to changing requirements.

Document

Some may wonder why documentation comes before implementation, but this is a crucial step. Documenting your API specification will more often than not bring your attention to design modifications that can or should be made. Furthermore, it will help guide your implementation to be effective and efficient. Once again, tools like API Description Languages (OpenAPI/Swagger, RAML, etc.) or API development platforms like Postman can be quite beneficial. Not only will they increase the efficiency of your documentation process, but they can also help you identify these modifications. Many documentation tools include a suite of useful capabilities: publishing your documentation, providing consumers with test beds, building automated test cases, or even generating baseline code for your implementation. Which tool fits best varies by project, but we highly recommend researching how one can benefit your documentation process.
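As a minimal illustration, an OpenAPI description of a single endpoint can be drafted as data before any code exists. The API title, path, and fields below are hypothetical stand-ins, not part of any real product:

```python
import json

# A minimal, hypothetical OpenAPI 3.0 fragment for one endpoint,
# expressed as a Python dict so it can be generated, validated,
# or serialized programmatically during the documentation step.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Company Directory API", "version": "0.1.0"},
    "paths": {
        "/employees": {
            "get": {
                "summary": "List employees",
                "responses": {
                    "200": {"description": "A JSON array of employees"},
                },
            },
        },
    },
}

# Serializing the spec yields a document that tools such as
# Swagger UI or Postman can consume directly.
document = json.dumps(spec, indent=2)
```

The same structure can then be extended endpoint by endpoint as the design evolves, keeping documentation and design in lockstep.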

Implement

You now have your API specification thoroughly defined and well thought out; it's time for the implementation. Of course, the details of your implementation are specific to your API, but there are some common best practices. Consider implementing contract testing: throughout your development, contract tests will help you catch and handle any inconsistencies between your implementation and your design. APIs almost always change over time, and keeping your design patterns consistent as your API evolves is vital for the health of your specification and its corresponding documentation. Similar to a code style guide, you can create a design style guide to protect the longevity of your API. In addition to consistency, a style guide will ensure that future design decisions and development are unambiguous and smooth. Refer back to our API Design Best Practices guide for more examples of what you can include in a design style guide.
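The contract-testing idea can be sketched very simply: check each response against the documented response schema. The contract, handler, and field names below are hypothetical stand-ins for your own specification:

```python
# A minimal sketch of contract testing: the handler's response is
# compared against the documented response shape. Real projects
# typically drive this from the OpenAPI spec itself.

CONTRACT = {"id": int, "name": str, "title": str}

def get_employee(employee_id):
    # Stand-in implementation; a real handler would query a datastore.
    return {"id": employee_id, "name": "Ada", "title": "Engineer"}

def satisfies_contract(response, contract):
    # Every documented field must be present with the documented type.
    return all(
        key in response and isinstance(response[key], expected)
        for key, expected in contract.items()
    )

assert satisfies_contract(get_employee(101), CONTRACT)
```

Run as part of CI, a check like this flags the moment an implementation drifts away from the published design.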

Final Thoughts

Defining an API specification is no simple task, and for that reason the design work will not conclude once you have published your API to its initial consumers. Much like all software development, it is an iterative, Agile process, and your API will be refined and augmented over time. Maintaining your API is the next part of the design journey. Be open to feedback from your consumers, and refer back to the process and design standards you set out for your organization.

This series will be continued in next week’s installment, API Documentation.

API Design – Attributes of a Quality API


By: Daniel Ors, Gabe Martinez

Series Introduction

An essential aspect of our massive software world is collaboration. Everything from open source communities to product integrations and even microservice communications requires collaboration to be successful. Software, and its continued development, relies on the collaborative efforts of those building it to truly scale into large user adoption. When it comes to communication about software and its development, effective APIs (Application Programming Interfaces) and their corresponding documentation are crucial to the development of a successful application. In this series we will cover the core attributes of a quality API, as well as how to construct effective APIs that support efficient adoption both internally and by external third parties.

An API is a defined method of interacting with a specific software application, describing standardized request formats and what to expect in return for each request type. Since the format for requests is frequently strictly defined, it is essential that the documentation be clear, unambiguous, and accurately updated over time. This allows developers to work with unfamiliar systems in a standardized way with zero to minimal involvement from the creator of the API itself. If you're interested in reading more about the basics of APIs in plain English, FreeCodeCamp has an excellent blog post on the subject.

Over the coming weeks we’ll be taking a deep dive into the key components of designing a successful API — from how specific API attributes make them compelling and easy to work with, to creating your own starting point for an API, as well as an all comers’ guide to how to use an API. We will begin with an examination of the Attributes of a Quality API, then the ins and outs of API Design, and conclude with a deep dive into how to Document your API to maximize its potential and usage.

Attributes of a Quality API

Quality APIs are identified by several notable attributes:

  • Clear Purpose
  • Strong Documentation that is Easy to Understand
  • Well-defined and Discrete Endpoints
  • Rich Data that Presents Significant Value to the Developer and End User
  • Potential for Extensibility in the Open Source Community
  • Conformance to an Established Conventional API Architectural Style (e.g. REST, GraphQL, RPC, Falcor, etc.)
    • How these concepts help craft more maintainable APIs
  • Strong Community Supporting its Development via Active Repository Management
  • Standards for Maintaining
  • Graceful Error Handling
  • Solid Security Practices

An API does not require all of these attributes to possess quality or potential, but the very best APIs all adhere to these principles when building out their functionality. If you are getting started with your API, these are clear goals to strive for in getting it off the ground.

Clear Purpose

A good API will have a clear mission statement that outlines the goals and objectives of its functionality. Without standards for maintaining it and a strong community understanding of its purpose, the API's long-term dependability will be very low, as it will appear that the core maintainers are not strongly invested in its success. In addition, the intended audience for an API should only influence who it is made available to; all of the quality standards mentioned here apply to each of the most common types of APIs: private, public, and commercial (also known as partner APIs).

Strong Documentation

Documentation that is easy to understand is critical to the success of any API. It is essential that documentation not be overly succinct, but rather balance detail with clarity. Without documentation, an API will be nearly impossible to access, and it will be difficult to parse what data, endpoints, or feature frameworks are available to use. This is a frequent issue with closed APIs, where the core users are a limited group of developers who hold all the keys to the castle in terms of knowledge. In this scenario, when new members of the team are introduced to the API, it is unlikely that quality contributions will be produced by the new developer unless the API is well-documented. Person-to-person knowledge transfer is a poor substitute for clear documentation, as clear documentation will always communicate a more transparent and complete picture and provides a persistent resource available for reference.

Well-Defined, Discrete Endpoints

Many APIs allow significant overlap in their data and endpoints, which, depending on the subject matter, may be appropriate. However, distinguishing between data areas has value in charting out your API. Discrete endpoints will prevent developers from getting bogged down in individual features, digging down into infinite JSON soup for applicable data for their use case. This improves usability significantly for your audience.

It is important to design and define these endpoints so that developers will have clear expectations for access and delivery of data. For example, if you have an API that routinely updates current information about a company in its own endpoint, and has a separate data endpoint for employees, it is better to only allow end users to request employee data from the top level employee endpoint (over which you will have more fine-tuned throttling control), rather than having unrestricted access to a separate api/company/employees endpoint. By restricting access to specific endpoints and resources, it will prevent misuse of your dataset and API, and improve the cost of hosting and maintaining it. This can be achieved by including clear rate limits for each endpoint in your documentation along with your authentication protocol for accessing the API.
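The per-endpoint throttling mentioned above can be sketched with a sliding-window counter. The endpoint names, limits, and window size below are illustrative, and production systems typically delegate this to an API gateway:

```python
import time
from collections import defaultdict, deque

# A minimal sketch of per-endpoint rate limiting: each endpoint
# gets its own request budget within a rolling time window.

LIMITS = {"/employees": 2}  # max requests per window, per endpoint

class RateLimiter:
    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.hits = defaultdict(deque)  # endpoint -> request timestamps

    def allow(self, endpoint, now=None):
        now = time.monotonic() if now is None else now
        hits = self.hits[endpoint]
        # Drop requests that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= LIMITS.get(endpoint, float("inf")):
            return False  # caller would respond with HTTP 429
        hits.append(now)
        return True

limiter = RateLimiter()
results = [limiter.allow("/employees", now=0.0) for _ in range(3)]
# → [True, True, False]: the third request exceeds the budget of 2
```

Keeping limits in a per-endpoint table like `LIMITS` is what gives you the fine-tuned control described above: the sensitive employee endpoint can be throttled far more aggressively than the rest of the API.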

Rich Data that Presents Significant Value

Without data that presents value to the developer and the user, an API will surely wallow in obscurity, whether it is built as a personal project or for an organizational purpose. Unless the API you are considering serves a use case that clearly is not met yet, it is likely that a parallel or adjacent API already exists that could support development of your idea. Extending that API's functionality to include yours may also present a more significant value-add to your own resume or your organization's reputation in the marketplace.

Potential for Extensibility in the Open Source Community

Without future potential for innovation, APIs will likely become largely stagnant and more focused on issue handling rather than providing room for growth. It is important to keep an eye on the horizon for the roadmap you envision for your API, or what the community suggests would represent quality additions to the feature set. Your stakeholders — whether they are private clients or the open source community — will have a vested interest in contributing to your API’s improvements and future viability.

Conform to an Established Conventional API Architectural Style

Several standard API architectural patterns have emerged over the years, including REST, SOAP, GraphQL, and RPC. While it matters which pattern you choose to model your API on, it is also critical that your documentation makes clear that it follows that convention. This will aid developers significantly in picking up your API and understanding its design, expectations, and, of course, quirks!

When you invest in following a particular API architectural pattern, it also improves the ability of your own engineers, as well as any potential open source developers, to maintain and extend the functionality of your application. This presents a very strong value-add for your organization’s product and its viability as a solution in the long-term.

Using accepted conventions of API style will also aid in its long-term viability and maintainability. In conforming to a pattern, a wider range of developers and engineers will quickly be able to jump in, identify solutions, and become a key contributor. It will aid in developer retention and overall productivity — the easier and more rewarding you make it for individuals to contribute and be an active member of the community, whether open or closed source, the more likely you are to reap exponential rewards.

Strong Community Supporting Development in Github/GitLab

Hosting and managing your API via active repository management and diligent issue documentation and management is crucial. This practice allows active developers, internal and external, to contribute to open issues and gain increased familiarity with your technology and expectations. This also allows you to better manage releases of your API, protecting your stable code branch while enabling feature enhancements and extensions of functionality to be tested in less stable branches.

Using a heavily trafficked versioning tool such as Github or GitLab will also encourage growth opportunities as they increasingly move to a more sophisticated social format for hosting code repositories, particularly open source repositories.

Standards for Maintaining

Setting baseline standards for your API is key to ensuring quality maintenance and fulfillment of the product roadmap. It will also make it easier for developers to contribute, as clear expectations will be present for issue resolution as well as feature requests.

Your standards should include several key points — a style guide for code, expectations for contributing, resources required, and rules for communication along with key maintainers.

A style guide will make it clear which specific conventions should be followed when submitting contributions to the codebase. This should include basics such as standard naming conventions, giving priority to commenting new code with sufficient detail, and best practices for specific syntax within the primary programming language used in your API. Airbnb's style guide for JavaScript is an excellent example of a strong syntactic standard.

Your section on expectations for contributing should demonstrate what a proper contribution process will look like — how to identify a suitable open issue to address, whether tests are required, and general timeframe for when to expect approvals or responses from key maintainers. (ex: “Our team spends about 10 hours per week on this project, average response time is 5-7 days.”)

A section covering the resource requirements to get started maintaining will also encourage developers to invest their time — as it will lower the bar to contributing significantly. By documenting your processes and what tools are used by the core team, it will be easier for new contributors to quickly get comfortable with the code and complete their submissions. This is an opportunity to document any existing known issues for developers in working with the project’s standard resources, allowing others to suggest new solutions from their own experience.

Rules for communication will provide clear guidelines for what should be communicated in the contribution process, from reasonable pull requests to what is allowed in the Slack/Gitter channel for the project. One key requirement common to most shared projects is that all decisions, support requests, and feature requests should be submitted in public channels, so that communication is transparent to all involved. These standards will streamline moderation and management of project channels. Along with these guidelines, a list of key maintainers and project managers should be included. This adds points of contact to relieve pain points if the standard contribution process does not proceed as expected.

Homebrew’s open source repository follows these principles and provides excellent examples with their Code of Conduct and Contributing Guide.

Graceful Error Handling

As your primary audience and user base will be developers, it is important to provide detailed error handling along with graceful exits. It is a far more developer-friendly experience to end a terminally errored process as soon as it reaches the error state, using conventions such as try/catch blocks to end the connection or request/response operation. It is also key to use error messages that are informative, concise, and provide the right level of detail to enable successful troubleshooting.
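A minimal sketch of this pattern, with a hypothetical lookup and error payload, might look like the following; the structure of the error body is an illustrative choice, not a standard:

```python
# A sketch of graceful error handling: the failing operation is
# wrapped so the consumer receives an informative, structured
# error response instead of a raw stack trace.

EMPLOYEES = {101: {"id": 101, "name": "Ada"}}  # illustrative dataset

def handle_get_employee(employee_id):
    try:
        return {"status": 200, "body": EMPLOYEES[employee_id]}
    except KeyError:
        # End the errored operation at the error state, with a
        # concise, actionable message for the consumer.
        return {
            "status": 404,
            "body": {"error": f"Employee {employee_id} not found"},
        }

response = handle_get_employee(999)
# → {"status": 404, "body": {"error": "Employee 999 not found"}}
```

The key design choice is that every failure path still produces a well-formed response with the right status code, so consumers can branch on errors programmatically rather than parse stack traces.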

Solid Security Practices

To protect developers, users, and yourself from exposed, non-encrypted connections where API keys and other secrets could be intercepted, you should require security practices such as SSL/TLS-only connections and secure authentication methods such as OAuth to verify access.
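Both checks can be sketched in a few lines. The token check below is a hypothetical stand-in for real OAuth validation, and the URL and token values are illustrative only:

```python
from urllib.parse import urlparse

# A sketch of two gatekeeping checks: refusing non-encrypted
# connections and verifying an OAuth-style bearer token.

VALID_TOKENS = {"secret-token"}  # illustrative; never hard-code real secrets

def authorize(url, headers):
    if urlparse(url).scheme != "https":
        return 403  # refuse plaintext connections outright
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer ") or auth[len("Bearer "):] not in VALID_TOKENS:
        return 401  # missing or invalid credentials
    return 200

status = authorize(
    "https://api.example.com/employees",
    {"Authorization": "Bearer secret-token"},
)
# → 200
```

In a real deployment, TLS termination and token validation would live in your gateway or framework middleware; the point of the sketch is that both checks happen before any handler logic runs.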

Wrapping Up

If you’ve followed these key steps in building your API, congratulations! You’ve very likely created a strong application that will encourage future development, extension, and user growth. The most critical next step to take is to continue to invest time in cultivating your API’s contributions and release roadmap. Focusing on maintaining the release cycle will ensure continually increased quality of service for both developers and end users.

This series will be continued in next week’s installment, How to Design a Robust API.
