API Lifecycle and Governance in the Enterprise: Build & Deployment Stage (Part 3 of 3)
The objective of any API is to meet the business's goals. Complicated as it may sound, building an API is easy! The challenge lies in developing the right API, one that does precisely what the business needs, by following the principles of good design: APIs need to be consistent to be easily consumable. APIs can unlock data, increase agility, feed innovation, and speed time-to-market.
This is the last in a series of three posts on API Lifecycle and Governance in the Enterprise.
This final post provides an overview of a digital bank composed of a set of microservices that communicate with each other, built with Node.js, Express, and MongoDB, and deployed on IBM API Connect and OpenShift on-premises. The scenario shows how IBM API Connect provides the foundation for building, securing, and consuming APIs to support a digital ecosystem. It also draws on the topics covered in the previous two posts by demonstrating the strength and agility of creating new systems of engagement (SOE) applications based on microservices. The API gateway addresses the challenge of exposing critical business assets in a secure and controlled way, connecting to backend systems of record (SOR) and third-party services while serving omni-channel consumers.
API Connect provides a complete solution that offers critical features, including the ability to:
- Create APIs and build Microservices.
- Provide and engage with application developers through API portals.
- Define, publish, and analyze REST and SOAP APIs.
- Offer options for development and testing environments, which will be elaborated on later in this post.
Testing Approaches
All APIs being built should be tested in some form. There are a variety of techniques and approaches to testing, which will not be exhaustively covered here. In general, the API testing process can be modeled as a progression through environments: development, SIT/Test, UAT, and production.
APIs can be tested using a variety of tools; a minimal scripted smoke test is sketched after this list:
- curl or wget (for very basic APIs)
- Postman
- JMeter
- The API Connect Developer Portal (if installed)
- … (an almost infinite set of other tools)
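As the simplest possible example of such a scripted check, a short Node.js smoke test might look like the following sketch (the endpoint URL and expected status are placeholder assumptions, not part of the reference application):

```javascript
// smoke-test.js: minimal API smoke test using Node 18+'s built-in fetch.
// The endpoint URL is a placeholder; substitute your API's test URL.
const BASE_URL = process.env.API_URL || 'http://localhost:3000';

async function main() {
  const res = await fetch(`${BASE_URL}/health`);
  if (res.status !== 200) {
    throw new Error(`expected HTTP 200, got ${res.status}`);
  }
  console.log('smoke test passed:', await res.json());
}

main().catch((err) => { console.error(err); process.exit(1); });
```

A script like this can live in source control next to the API definition, so the same check can be rerun in every environment.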
Also, in some cases (particularly during the earlier phases of testing) it can be useful to simulate the backend, which in many companies is a SOAP-based web service. There are a variety of ways of simulating that backend:
- Hardcoded JSON file
- Simulated service implementation (typically another SOAP service on IBM WebSphere Suite)

There are a variety of benefits to simulating the backend (a minimal stub is sketched after this list):

- APIs can be developed on a lifecycle independent of the backend.
- When doing problem diagnosis, issues in the API can be isolated from issues in the backend.

Ultimately, all testing should be owned by the same organization (squad) that owns the development of the API. Initially, that is likely to be the API developers aligned to a project.
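For the hardcoded-JSON approach, the stub can be as small as the following Express sketch (the route and payload are illustrative assumptions standing in for the real backend):

```javascript
// backend-stub.js: serves a canned response so API development can
// proceed independently of the real backend.
const express = require('express');
const app = express();

// Hardcoded payload standing in for the real backend's response.
const cannedAccounts = [
  { id: 'acc-1', owner: 'u1', balance: 100.0 },
  { id: 'acc-2', owner: 'u1', balance: 2500.5 },
];

app.get('/accounts', (req, res) => {
  res.json(cannedAccounts);
});

app.listen(4000, () => console.log('backend stub listening on :4000'));
```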
Development / SIT / Test-level
All APIs should undergo testing. Ideally, before APIs are even deployed to the shared SIT (system integration testing)/Test API Connect environment, they should undergo a basic level of testing on the developer's machine. The exact approach is left up to the developer, but they should satisfy themselves that the API appears to work. Once the API is deployed to the shared Dev/SIT environment, it should be fully tested using a standard testing tool or method, such as Postman. This enables the tests to be saved in source control alongside the API itself, meaning that they can be rerun repeatably over time.
UAT
When an API passes all the suitable tests in the development environment, it is time to move it to the UAT (user acceptance testing) environment. At this point, testing should be more formalized. If the API is at this stage connected to a real UAT backend, the same tests that were run in development should be repeated. In addition, tests that extend the coverage of the API should be run. In time, it will likely be beneficial for companies to build up a dedicated team for constructing and validating these additional tests.
Production
At production go-live events, when APIs are promoted into production, tests should be rerun. However, since the APIs are now connected to real production backends, it is important to ensure that test activity does not damage production data.
Other ways to improve code quality
As well as formal code testing, there are some other ways to improve code quality:
- Code reviews provide a way for more senior, experienced developers to give feedback on the APIs built by more junior developers. Typically, these are semi-formalized sessions where developers review the work of others, ideally with the author in the room, receiving feedback and making changes in place.
- Pair programming, a common technique for improving code quality, involves developers working together to build APIs and resolve issues with them. We recommend that companies institute both practices.
- Test-Driven Development (TDD) is a software development process of writing automated tests before writing the implementation, ensuring that code works. First, you write a test and watch it fail (red); then you write the implementation and watch the test pass (green); finally, you refactor if needed. Repeat the cycle as you build out the system (a tiny sketch follows this list).
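As a minimal illustration of the red-green-refactor cycle (the formatIban function and its expected output are hypothetical examples, not code from this project):

```javascript
// A TDD-style micro-example (hypothetical formatIban function).
// Step 1 (red): write the assertion at the bottom first; with no
// implementation, running this file fails.
const assert = require('assert');

// Step 2 (green): the simplest implementation that makes the test pass.
function formatIban(iban) {
  // Insert a space after every four characters, except at the end.
  return iban.replace(/(.{4})(?!$)/g, '$1 ');
}

// Step 3 (refactor): clean up the implementation while keeping this green.
assert.strictEqual(formatIban('GB82WEST12345698765432'),
                   'GB82 WEST 1234 5698 7654 32');
console.log('test passed');
```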
Implementation Approach
When implementing APIs using API Connect, there are essentially two choices:
- Implementing APIs using the developer toolkit/API Manager as APIs that use the API Gateway only at runtime.
- Implementing APIs (based on LoopBack) that use the API Connect IBM Liberty Collective; in other words, they have some associated Node.js/LoopBack code. We sometimes refer to these as microservices. Note that this is somewhat different from microservices in the architectural sense (small services, as distinguished from monoliths); here, we are focusing on the word as it is used in an API Connect context, meaning the implementation approach.
The following guidelines should be used to choose the implementation approach:
- In general, the API Gateway approach should be adopted unless there is a compelling reason requiring the microservice (coding) approach, because it requires minimal coding and less infrastructure and is generally simpler. However, it may prohibit easy connectivity to non-HTTP types of backend service.
- For more complex integration, StrongLoop (microservices) can provide integration with databases and other types of backends, and code in StrongLoop can be used to customize the behavior of APIs more easily (a small sketch follows this list). In general, we wouldn't recommend making this code complex; when trying to integrate with backends StrongLoop doesn't explicitly support (e.g. AD), an ESB or integration engine (e.g. IBM Integration Bus) is more suitable.
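To give a flavor of the kind of light customization meant here, a LoopBack 3-style remote method might look like the following (the model and method names are illustrative assumptions):

```javascript
// common/models/account.js: LoopBack 3-style model customization.
// Exposes a custom endpoint, GET /Accounts/balance-summary.
module.exports = function (Account) {
  Account.balanceSummary = async function () {
    // Query the model's attached datasource (e.g. MongoDB) and aggregate.
    const accounts = await Account.find({});
    return { total: accounts.reduce((sum, a) => sum + a.balance, 0) };
  };

  Account.remoteMethod('balanceSummary', {
    http: { path: '/balance-summary', verb: 'get' },
    returns: { type: 'object', root: true },
  });
};
```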
Another question that should be considered is the number of backend integrations an API requires. APIs should not be used to orchestrate or coordinate significant numbers of backend service invocations. If any of the following conditions are met, the orchestration should not be implemented in API Connect:
- When the combined expected response time of the backend systems, plus a small allowance of roughly 0.5s for the API execution itself, exceeds the expected total response time of the API. If it does, the backend services will need to be refactored and their performance improved. In practice, this is likely to limit the number of backends to a maximum of approximately 2–3.
- When the number of backend invocations to be performed is indeterminate (in other words, it is not known ahead of time how many invocations will be needed). In this case, the backend service should be refactored to widen its scope so that a deterministic number of invocations can be performed.
- When any of the backend systems are known to be regularly unreliable (for example, one invocation may fail whilst the others succeed). Strictly speaking, this does not rule out implementing it as an API, but it will make the API developer's job significantly more complex, as they will need to handle the error cases and send back a result payload showing partial success. The client may also have to implement partial retry logic.
- When it is critical that atomicity is maintained between different service invocations (for example, one invocation debits money from an account and another credits it elsewhere). HTTP, the mechanism typically used to call these backend services, is inherently unreliable and is not designed to be atomic or transactional. In this case, an ESB or BPM solution should be used, or the backend services should be refactored to be atomic.
- Long-running orchestration (meaning more than ~10s) should always be performed using asynchronous communication mechanisms (queueing or similar) rather than synchronous messaging such as HTTP, as the failure modes are far more straightforward. In this case, two APIs should be implemented: one to initiate the orchestration (which should then be managed by an ESB), and another to check progress and pick up the result. The front-end UI would need to reflect this initiate/progress-check distinction (a rough sketch follows this list).

In general, frameworks that abstract away the API Gateway and don't allow the data to be modeled in the API Gateway (and therefore its structure to be enforced according to the OpenAPI Specification (OAS), formerly Swagger) should be avoided.
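As a rough sketch of the initiate/progress-check pattern (the endpoint names and in-memory store are illustrative assumptions; a real implementation would hand the work to an ESB or queue and persist state durably):

```javascript
const express = require('express');
const crypto = require('crypto');
const app = express();

// Illustrative in-memory store; real state would live in a durable store.
const orchestrations = new Map();

// API 1: initiate the long-running orchestration and return immediately.
app.post('/orchestrations', (req, res) => {
  const id = crypto.randomUUID();
  orchestrations.set(id, { status: 'IN_PROGRESS', result: null });
  // Simulated asynchronous completion (in reality, an ESB callback
  // or queue consumer would update the state).
  setTimeout(() => {
    orchestrations.set(id, { status: 'COMPLETE', result: { ok: true } });
  }, 10000);
  res.status(202).json({ id, status: 'IN_PROGRESS' });
});

// API 2: progress-check / pick up the result.
app.get('/orchestrations/:id', (req, res) => {
  const entry = orchestrations.get(req.params.id);
  if (!entry) return res.status(404).json({ error: 'unknown orchestration' });
  res.json({ id: req.params.id, ...entry });
});

app.listen(3000);
```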
There are a few ways of deploying API Connect that should be considered going forward:
- Full Kubernetes deployment: all subsystems (management, analytics, portal, gateway) run in one or more Kubernetes clusters.
- Partial Kubernetes deployment: some subsystems (management, analytics, portal) run in one or more Kubernetes clusters, and the gateway subsystem runs on a DataPower appliance (physical or virtual).
- Full VMware deployment: all subsystems (management, analytics, portal, gateway) are deployed using VMware OVA files.
- Cloud Pak for Integration, which is just one option for deploying on Kubernetes (ICP, RHOS, or pure K8s).
In any event, companies should consider developing skills in Docker, Kubernetes, and OpenShift; container orchestration is the keystone of how modern applications are hosted, and its influence stretches far beyond API Connect and IBM.
Catalogs
When you publish an API Product to a catalog, that API Product becomes available on the Developer Portal that is associated with that catalog. An API Product can be published to multiple catalogs. You might have different catalogs for different consumers, such as one catalog for business partners and another for internal usage. You might also use different catalogs for continuous integration; for example, one catalog for preproduction activities, such as quality assurance and testing, and another catalog for production use.
Security
The terms in this section are often overloaded in the IT industry and require clarification. In this post, they have the following meanings.
Authentication: An action used to determine the identity of a principal. A principal can be a human user, a machine, a VM, a system, or an application. A successful authentication establishes the identity of the principal; it does not determine what actions the principal can undertake.
Authorization: Authorization determines whether an authenticated user is allowed to perform an action on a resource. Authorization decisions are made separately from authentication decisions.
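To make the distinction concrete, here is a minimal Express sketch (the token lookup and role rule are placeholder assumptions, not this application's actual security code):

```javascript
const express = require('express');
const app = express();

// Hypothetical token lookup for the sketch; a real service would
// validate a signed credential such as a JWT.
function lookupUserByToken(token) {
  return token === 'Bearer demo' ? { id: 'u1', roles: ['teller'] } : null;
}

// Authentication: establish WHO the principal is.
function authenticate(req, res, next) {
  const user = lookupUserByToken(req.headers.authorization);
  if (!user) return res.status(401).json({ error: 'unauthenticated' });
  req.user = user; // identity established; nothing yet about permissions
  next();
}

// Authorization: decide WHETHER this principal may perform this action.
function authorize(role) {
  return (req, res, next) => {
    if (!req.user.roles.includes(role)) {
      return res.status(403).json({ error: 'forbidden' });
    }
    next();
  };
}

app.get('/accounts', authenticate, authorize('teller'), (req, res) => {
  res.json([{ id: 'acc-1', balance: 100 }]);
});

app.listen(3000);
```

Note how the two decisions are made separately: a request can authenticate successfully (401 avoided) and still be denied authorization (403).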
Putting it all together
The remainder of this post brings together the steps required to integrate IBM API Connect with OpenShift in a fully controlled deployment lifecycle, leveraging IBM Secure Gateway as a secure tunnel to the on-premises resources.
I have uploaded the source code to a GitHub repo at this location so you can follow along, should you wish to put this to the test.
We will work through the following steps:
- Build the code based on the Swagger specification discussed during the design phase (found at this location).
- Deploy the artifact on a local instance of OpenShift by following the Deploying to OpenShift section.
- Follow the steps in the Managing API Endpoints with IBM API Connect section.
- Finally, follow the steps in the Create a Secure Gateway service with IBM Cloud section below.
Digital banking application inspired by Monzo banking
When thinking of business capabilities, our imaginary bank will need the following set of microservices running on OpenShift locally, interfacing with an IBM Secure Gateway running on IBM Cloud (a sketch of one such service follows this list):
- Portal: Loads the UI, takes care of user sessions, and relies on all the other microservices for core functionality.
- Authentication: Handles user profile creation, as well as login & logout.
- Accounts: Handles creation, management, and retrieval of a user’s banking accounts.
- Transactions: Handles creation and retrieval of transactions made against users’ bank accounts.
- Bills: Handles creation, payment, and retrieval of bills.
- Support: Handles communication with Watson Assistant to enable a support chat feature.
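To give a flavor of what one of these services looks like, here is a stripped-down sketch of an Accounts-style service in Express and MongoDB (routes, collection names, and the connection string are illustrative assumptions; see the GitHub repo for the real code):

```javascript
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

async function main() {
  // Connection string is an assumption; the real deployment injects it via config.
  const client = await MongoClient.connect(
    process.env.MONGO_URL || 'mongodb://localhost:27017'
  );
  const accounts = client.db('bank').collection('accounts');

  // Create a banking account for a user.
  app.post('/accounts', async (req, res) => {
    const result = await accounts.insertOne({ userId: req.body.userId, balance: 0 });
    res.status(201).json({ id: result.insertedId });
  });

  // Retrieve a user's accounts.
  app.get('/accounts', async (req, res) => {
    res.json(await accounts.find({ userId: req.query.userId }).toArray());
  });

  app.listen(8080, () => console.log('accounts service listening on :8080'));
}

main().catch((err) => { console.error(err); process.exit(1); });
```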
Deploying to OpenShift
OpenShift is a family of open-source containerization software developed by Red Hat, which was acquired by IBM on July 9, 2019. Its flagship product is the OpenShift Container Platform, an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. It is trusted by more than 1,000 customers to deliver business-critical applications, whether they're migrating existing workloads to the cloud or building new, cutting-edge experiences for their users.
We will leverage its capabilities to deploy our banking API solution by:
- Creating a project and deploying pre-existing application container images.
- Building application container images from a Dockerfile and deploying them.
- Implementing and extending application image builders.
- Using incremental and chained builds to accelerate build times.
- Making an application visible outside the OpenShift cluster so it can be reached through IBM Secure Gateway.
- Automating builds by using a webhook to link OpenShift to a Git repository.
CodeReady Containers: Run OpenShift 4.x locally
```
./deploying_on_openshift.sh
```
Managing API Endpoints with IBM API Connect
IBM API Connect is an API management solution that offers capabilities to create, run, manage, and secure APIs and microservices. Using these capabilities, the full lifecycle of APIs can be managed for on-premises and cloud environments. We will walk through a step-by-step guide on how to build your APIs coming out of the design phase, starting with the API endpoints from the previous post.
Refer to this link for a full description of the API contract aligned with the target solution.
1. Log in to IBM Cloud to access API Connect.
2. Create a new API.
3. Select Import API from a file or URL.
4. Click "Select File" and navigate to the location of the Swagger 2.0 YAML file on your file system.
5. Select "Schemes".
6. Uncheck "http".
7. Go to "Assemble".
8. Click "Create Assembly".
9. Add an Invoke policy.
10. Set the following properties for Invoke:
    - URL: https://<host name of secure gateway service>:<secure gateway service port number>$(request.path)$(request.search)
    - HTTP Method: Keep
11. Add a "set-variable" policy before the Invoke policy.
12. Set the following properties:
    - Action: Set
    - Set: message.headers.host
    - Type: string
    - Value: <the hostname of the route for the service in OpenShift>
13. Save changes, then publish the API to the gateway and test. (A rough sketch of the resulting assembly follows.)
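For reference, the assembly these steps produce looks roughly like the YAML sketch below in the API definition's x-ibm-configuration section (property names follow the API Connect assembly syntax as I understand it and may vary by version; host and port values are placeholders):

```yaml
x-ibm-configuration:
  assembly:
    execute:
      # Rewrite the Host header to the OpenShift route before invoking.
      - set-variable:
          title: set host header
          actions:
            - set: message.headers.host
              value: '<hostname of the OpenShift route>'
      # Proxy the request through the Secure Gateway to the backend.
      - invoke:
          title: invoke backend
          target-url: 'https://<secure-gateway-host>:<port>$(request.path)$(request.search)'
          verb: keep
```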
Create a Secure Gateway service with IBM Cloud
The Secure Gateway service provides access from the cloud to backend services running on an internal customer network. It manages the mapping between your local and remote destinations and monitors all of your traffic through a secure tunnel.
Create a Secure Gateway instance, then follow the instructions here to properly set up the connection to your on-premises resources.
Conclusion
We hope you have found this series of posts informative. Let's quickly recap what we have learned. Part one was about the importance of good governance, with "Planning an API Initiative Strategy and Governance Model" and the "API Lifecycle" in mind. Part two provided an overview of API design: characteristics, business value, lifecycle, and strategy. Part three brought it all home with an implementation approach, stressing key topics such as deploying to OpenShift, managing API endpoints, the Secure Gateway service, testing approaches, implementation approach, and security. This was not meant to be exhaustive, but enough for you to understand an API strategy and its impact, and to offer some relevant information in your quest to build a great API as you embark on your journey to cloud.
Attribution
Special thank you to Enrique (Ike) Relucio from IBM Garage ASEAN, who shared his knowledge of the banking industry and the API economy. Thank you to Kok Sing Khong, Integration & Development Lead, IBM Cloud and Cognitive Software, who shared his expertise on API Connect. Also, thank you to Aldred Benedict, Blockchain Labs Developer, IBM Industry Platform, who shared his expertise on OpenShift.
Bring your plan to the IBM Garage.
Are you ready to learn more about working with the IBM Garage? We’re here to help. Contact us today to schedule time to speak with a Garage expert about your next big idea. Learn about our IBM Garage Method, the design, development and startup communities we work in, and the deep expertise and capabilities we bring to the table.
Schedule a no-charge visit with the IBM Garage.