API Lifecycle and Governance in the Enterprise: Build & Deployment Stage (Part 3 of 3)

Figure 1: API Implementation for Enterprises
  • Part 1: Plan
  • Part 2: Design
  • Part 3: Build & Deployment
The Build & Deployment stage covers the following capabilities:
  • Create APIs and build microservices.
  • Provide and engage with application developers through API portals.
  • Define, publish, and analyze REST and SOAP APIs.
  • Offer options for development and testing environments, which are elaborated later in this blog.

Testing Approaches

All APIs being built should be tested in some form. There are many techniques and approaches to testing, and they will not be covered exhaustively here. In general, the API testing process can be modeled like this:

Figure 2: Testing Approaches

During development and testing, the backend behind an API can be stubbed out in two common ways:
  • A hardcoded JSON file
  • A simulated service implementation (typically another SOAP service on IBM WebSphere Suite)

Simulating the backend brings a number of benefits:
  • APIs can be developed on an independent lifecycle from the backend.
  • During problem diagnosis, issues in the API can be isolated from issues in the backend.

Ultimately, all testing should be owned by the same organization (squad) that owns the development of the API. Initially, that is likely to be the API developers aligned to a project.

Development / SIT / Test-level

All APIs should undergo testing. Ideally, before APIs are even deployed to the shared SIT (system integration testing) / test API Connect environment, they should undergo a basic level of testing on the developer's machine. The exact approach is left up to the developer, but they should satisfy themselves that the API appears to work. Once the API is deployed to the shared Dev / SIT environment, it should be fully tested using a standard testing tool or method, such as Postman. This allows the tests to be saved in source control alongside the API itself, so that they can be re-run repeatedly over time.
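Such a source-controlled test might be a Postman collection or a small scripted check. Below is a minimal sketch, assuming a Node.js 18+ toolchain and a hypothetical /banking/v1/accounts endpoint (neither is prescribed by API Connect):

  // accounts.test.ts: a minimal API test kept in source control
  // alongside the API definition. Host and path are placeholders.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  const BASE_URL = process.env.API_BASE_URL ?? 'https://dev-api.example.com';

  test('GET /banking/v1/accounts returns 200 and a list', async () => {
    const res = await fetch(`${BASE_URL}/banking/v1/accounts`, {
      headers: { Accept: 'application/json' },
    });
    assert.equal(res.status, 200);
    const body = await res.json();
    assert.ok(Array.isArray(body.accounts), 'expected an accounts array');
  });

Because the test is just a file in the repository, the same check can run on the developer's machine and again in the shared Dev / SIT environment.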

UAT

When an API passes all the suitable tests in the development environment, it is time to move it to the UAT (user acceptance testing) environment. At this point, testing should be more formalized. If the API is connected to a real UAT backend at this stage, the same tests that were run in development should be repeated. In addition, tests that extend the coverage of the API should be run. Over time, it will likely be beneficial for companies to build up a dedicated team for constructing and validating these additional tests.

Production

At production go-live, when APIs are promoted into production, the tests should be re-run. However, since the APIs are now connected to real production backends, the tests must be designed carefully so that production data is not modified or damaged.

Other ways to improve code quality

As well as formal code testing, there are some other ways to improve code quality:

  • Code reviews give more senior, experienced developers a way to provide feedback on APIs built by more junior developers. Typically these are semi-formalized sessions in which developers review the work of others, ideally with the author in the room, receiving feedback and making changes on the spot.
  • Pair programming, a common technique for improving code quality, involves two developers working together to build APIs and resolve issues with them. We recommend that companies institute both practices.
  • Test-Driven Development (TDD) is a software development process in which automated tests are written before the implementation, to ensure that the code works. First, write a test and watch it fail (red); then write the implementation and watch the test pass (green); finally, refactor if needed. Repeat the cycle as you build out the system. A small sketch of one cycle follows below.
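Here is a minimal sketch of one red/green/refactor cycle, using a hypothetical maskAccountNumber helper (not code from any real application):

  // --- account-utils.test.ts ---
  // Step 1 (red): write the test first. It fails because
  // maskAccountNumber does not exist yet.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';
  import { maskAccountNumber } from './account-utils';

  test('masks all but the last four digits', () => {
    assert.equal(maskAccountNumber('12345678'), '****5678');
  });

  // --- account-utils.ts ---
  // Step 2 (green): the simplest implementation that makes the test pass.
  export function maskAccountNumber(acct: string): string {
    return '*'.repeat(Math.max(acct.length - 4, 0)) + acct.slice(-4);
  }

  // Step 3 (refactor): clean up while keeping the test green, then repeat.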

Implementation Approach

When implementing APIs using API Connect, there are essentially two choices:

  • Implementing APIs using the developer toolkit / API Manager, as APIs which use only the API Gateway at runtime.
  • Implementing APIs (based on LoopBack) which use the API Connect IBM Liberty Collective; in other words, they have some associated Node.js/LoopBack code. We sometimes refer to these as microservices. Note that this is somewhat different from the word microservices in the architectural sense (meaning small services, as distinguished from monoliths); here we are focusing on the word as it is used in an API Connect context, meaning the implementation approach.

In general, the API Gateway approach should be adopted unless there is a compelling reason to use the microservice (coding) approach, because it requires minimal coding and less infrastructure, and is generally simpler. It may, however, prevent easy connectivity to non-HTTP types of backend service. For more complex integration, StrongLoop (microservices) can provide integration with databases and other types of backend, and StrongLoop code can be used to customize the behavior of APIs more easily. In general, we would not recommend making this code complex; when trying to integrate with backends StrongLoop does not explicitly support (e.g. Active Directory), an ESB or integration engine (e.g. IBM Integration Bus) is more suitable.

The following situations indicate that orchestration should not be implemented in the API layer, or that the backend services should first be refactored:
  • When the expected response time of the backend systems exceeds the expected total response time of the API itself (plus a small factor of ~0.5s for the API execution itself). If this budget is exceeded, the backend services will need to be refactored and their performance improved. In practice, this is likely to limit the number of backends to a maximum of approximately 2–3.
  • When the number of backend orchestrations to be performed is indeterminate (in other words, it is not known ahead of time how many invocations will be needed). In this case, the backend service should be refactored to widen its scope, so that a deterministic number of invocations can be performed.
  • When any of the backend systems are known to be regularly unreliable (for example, one invocation may fail whilst the others succeed). Strictly speaking, this does not rule out implementing this as an API, but it will make the API developer's job significantly more complex: they will need to handle the error cases and send back a result payload showing partial success, and the client may also have to implement partial retry logic.
  • When it is critical that atomicity is maintained between different service invocations (for example, one invocation debits money from an account and another credits it elsewhere). HTTP, the mechanism typically used to call these backend services, is inherently unreliable and is not designed to be atomic/transactional. In this case, an ESB or BPM solution should be used, or the backend services should be refactored to be atomic.
  • When the orchestration is long-running (meaning more than ~10s). Such orchestration should always be performed using asynchronous communication mechanisms (queueing or similar) rather than synchronous messaging such as HTTP, because the failure modes are far more straightforward. In this case, two APIs should be implemented: one to initiate the orchestration (which should then be managed by an ESB), and another to progress-check and pick up the result. The front-end UI would need to reflect this initiate/progress-check distinction; a minimal sketch of the pattern follows below.
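Here is a minimal sketch of the initiate/progress-check pattern, using Express and hypothetical /transfers endpoints for brevity (in practice, the orchestration itself would be handed off to an ESB or a queue rather than held in memory):

  import express from 'express';
  import { randomUUID } from 'node:crypto';

  const app = express();
  app.use(express.json());

  // In-memory status store; a real implementation would use a durable store.
  const jobs = new Map<string, { status: string; result?: unknown }>();

  // API 1: initiate the long-running orchestration and return immediately.
  app.post('/transfers', (_req, res) => {
    const id = randomUUID();
    jobs.set(id, { status: 'IN_PROGRESS' });
    // Hand the work off to the ESB / message queue here (not shown).
    res.status(202).location(`/transfers/${id}`).json({ id, status: 'IN_PROGRESS' });
  });

  // API 2: progress-check and pick up the result.
  app.get('/transfers/:id', (req, res) => {
    const job = jobs.get(req.params.id);
    if (!job) return res.status(404).json({ error: 'unknown transfer' });
    res.json({ id: req.params.id, ...job });
  });

  app.listen(3000);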
In general, frameworks that abstract away the API Gateway and do not allow the data to be modeled in the API Gateway (and therefore its structure to be enforced according to the OpenAPI Specification (OAS), formerly Swagger) should be avoided.

Deployment Options

API Connect supports several deployment topologies:
  1. Full Kubernetes deployment: all subsystems (management, analytics, portal, gateway) run in one or more Kubernetes clusters.
  2. Partial Kubernetes deployment: some subsystems (management, analytics, portal) run in one or more Kubernetes clusters, and the gateway subsystem runs on a DataPower appliance (physical or virtual).
  3. Full VMware deployment: all subsystems (management, analytics, portal, gateway) are deployed using VMware OVA files.
  4. Cloud Pak for Integration: just one of the options for deploying on Kubernetes (ICP, Red Hat OpenShift, or pure Kubernetes).

Catalogs

When you publish an API Product to a catalog, that API Product becomes available on the Developer Portal associated with that catalog. An API Product can be published to multiple catalogs. You might have different catalogs for different consumers, such as one catalog for business partners and another for internal use. You might also use different catalogs for continuous integration; for example, one catalog for preproduction activities, such as quality assurance and test, and another catalog for production use.
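As an illustration, the same Product published to two catalogs is exposed under two different base URLs. By default, the provider organization and catalog name appear in the gateway URL path, though the exact pattern depends on the API Connect version and configuration (hostnames below are hypothetical):

  https://gateway.example-bank.com/innovate-org/sandbox/banking/v1
  https://gateway.example-bank.com/innovate-org/production/banking/v1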

Security

The term in this section title is often overloaded in the IT industry and requires clarification; in this post, it carries the following meaning.

Putting it all together

The remainder of this blog brings together the steps required to integrate IBM API Connect with OpenShift in a fully controlled deployment lifecycle, leveraging IBM Secure Gateway as a secure tunnel to the specific on-premises resources to be connected:

  • Build the code based on the Swagger specification discussed during the design phase; the code can be found at this location.
  • Deploy the artifact on a local instance of OpenShift by following the Deploying to OpenShift section.
  • Follow the steps in the Managing API Endpoints with IBM API Connect section.
  • Finally, follow the steps in the Create a Secure Gateway service with IBM Cloud section below.
Figure 3: Banking Application
  1. Portal: Loads the UI, manages user sessions, and relies on all the other microservices for core functionality.
  2. Authentication: Handles user profile creation, as well as login & logout.
  3. Accounts: Handles creation, management, and retrieval of a user’s banking accounts.
  4. Transactions: Handles creation and retrieval of transactions made against users’ bank accounts.
  5. Bills: Handles creation, payment, and retrieval of bills.
  6. Support: Handles communication with Watson Assistant to enable a support chat feature.
Figure 4: Banking Application Architecture
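To make the shape of these services concrete, here is a minimal sketch of what the Accounts microservice's HTTP surface might look like (illustrative only, not the actual application code; Express and an in-memory store stand in for the real framework and database):

  import express from 'express';

  const app = express();
  app.use(express.json());

  interface Account { id: string; owner: string; balance: number; }
  const accounts = new Map<string, Account>(); // stand-in for a real database

  // Creation of a user's banking account.
  app.post('/accounts', (req, res) => {
    const account: Account = { id: String(accounts.size + 1), owner: req.body.owner, balance: 0 };
    accounts.set(account.id, account);
    res.status(201).json(account);
  });

  // Retrieval of a user's banking accounts.
  app.get('/accounts', (_req, res) => {
    res.json({ accounts: [...accounts.values()] });
  });

  app.listen(8080);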

Deploying to OpenShift

OpenShift is a family of open-source containerization software developed by Red Hat, which IBM acquired on July 9, 2019. Its flagship product is the OpenShift Container Platform, an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. It is trusted by more than 1,000 customers to deliver business-critical applications, whether they are migrating existing workloads to the cloud or building new, cutting-edge experiences for their users.

This part of the walkthrough covers:
  • Creating a project and deploying pre-existing application container images.
  • Building application container images from a Dockerfile and deploying them.
  • Implementing and extending application image builders.
  • Using incremental and chained builds to accelerate build times.
  • Making an application visible outside the OpenShift cluster so that it can be reached through IBM Secure Gateway.
  • Automating builds by using a webhook to link OpenShift to a Git repository.
Figure 5: OpenShift Container Platform
./deploying_on_openshift.sh
Figure 6: Deploy on OpenShift Container Platform

Managing API Endpoints with IBM API Connect

IBM API Connect is an API management solution from IBM that offers capabilities to create, run, manage, and secure APIs and microservices. With these capabilities, the full lifecycle of APIs can be managed across on-premises and cloud environments. We will walk you through a step-by-step guide to building your APIs from the design phase, starting with the API endpoints from the previous post.

Figure 7: Innovate bank API 3.0
  1. Log in to IBM Cloud to access API Connect:
  2. Create a new API:
Figure 8: IBM Secure Gateway screen 1
Figure 9: IBM Secure Gateway screen 2
Figure 10: IBM Secure Gateway screen 3
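Once the API is published, a consumer application calls it through the gateway, identifying itself with the client ID of its subscribed application. A minimal sketch follows (hypothetical host and path; API Connect conventionally passes the client ID in the X-IBM-Client-Id header):

  // Hypothetical consumer-side call to an API managed by API Connect.
  // Run as an ES module; Node 18+ provides fetch.
  const res = await fetch('https://gateway.example-bank.com/innovate-org/sandbox/banking/v1/accounts', {
    headers: {
      'X-IBM-Client-Id': process.env.CLIENT_ID ?? '<your-client-id>',
      Accept: 'application/json',
    },
  });
  console.log(res.status, await res.json());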

Create a Secure Gateway service with IBM Cloud

The Secure Gateway service provides the cloud with access to backend services running on an internal customer network. It manages the mapping between your local and remote destinations and monitors all of your traffic through a secure tunnel.

Figure 11: Secure Gateway
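From the application's point of view, the tunnel is transparent: the cloud-side code calls the cloud host and port that the Secure Gateway maps to the on-premises destination, with no inbound firewall ports opened. A minimal sketch, where the hostname and port are placeholders taken from the gateway's destination details:

  // The Secure Gateway maps this cloud endpoint to an on-premises
  // destination (e.g. the OpenShift route created earlier).
  // Run as an ES module; Node 18+ provides fetch.
  const onPremViaTunnel = 'https://cap-sg-prd-1.securegateway.appdomain.cloud:12345';

  const res = await fetch(`${onPremViaTunnel}/banking/v1/accounts`, {
    headers: { Accept: 'application/json' },
  });
  console.log(res.status);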

Conclusion

We hope you have found this series of posts informative; we aimed to provide at least a small amount of information relevant to your quest to build a great API. Let's quickly recap what we have learned so far. Part one was about the importance of good governance, with "Planning an API Initiative Strategy and Governance Model" and the "API Lifecycle" in mind. Part two provided an overview of API design: characteristics, business value, lifecycle, and strategy. Part three brought it all home with an implementation approach, stressing key topics such as deploying to OpenShift, managing API endpoints, the Secure Gateway service, testing approaches, the implementation approach, and security. This was not meant to be exhaustive, but it should be enough to help you understand an API strategy and its impact as you embark on your Journey to Cloud.

Attribution

Special thank you to Enrique (Ike) Relucio from IBM Garage ASEAN, who shared his knowledge of the banking industry and the API Economy. Thank you to Kok Sing Khong, Integration & Development Lead, IBM Cloud and Cognitive Software, who shared his expertise on API Connect. Also, thank you to Aldred Benedict, Blockchain Labs Developer, IBM Industry Platform, who shared his expertise on OpenShift.

Ernese Norelus

Ernese is responsible for providing technical oversight to Cloud client projects!