In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
Understanding how to organize a pipeline from development to operation has, in my experience, proven to be quite the endeavor. This tutorial aims to tackle precisely that challenge by walking through the tools required to deploy your code as Docker containers, using a simple "Hello, World!" application as the example (existing projects can easily follow the same approach). Whether you're a seasoned developer seeking to optimize your workflow or a newcomer eager to learn best practices, this tutorial will equip you with the knowledge and tools to streamline your development process. Becoming proficient with this setup will let you deliver high-quality software faster, with fewer errors, and better meet the demands of today's agile development environments. If you have come far enough to consider a pipeline for your project, I expect you to be familiar with the simpler tools involved (e.g., Git, Java, Maven) and will not cover them in depth.

To build a pipeline for our "Hello, World!" application, the following subjects will briefly be covered:

- Azure DevOps
- Azure Repos
- Maven
- Git
- Azure Pipelines
- Docker

To make the goal clear: we want to be able to run `docker run <dockerid>/<image>:<tag>` after having done nothing more than `git push` on master. This is an attempt to create a foundation for future CI/CD implementations, ultimately leading to a DevOps environment.

Azure DevOps

One of the prerequisites for this walkthrough is the Azure DevOps platform. I can highly recommend the full package, but only the Repos and Pipelines modules are required. So, if you have not already, sign up and create a project. After doing so, we can proceed to the Repos module.

Azure Repos

This module provides simple tools for maintaining a repository for your code. While a repository could easily be managed by something like GitHub, this module offers solid synergy between repositories and pipelines. After you click on the module, you will be met with the usual Git instructions for setting up a repository. I highly recommend using the SSH method for long-term use (if this is unfamiliar, see "Connect to your Git repos with SSH"). After setting it up, you will be able to clone the repository onto your computer.

Next, we will create a Maven project inside the repository folder using IntelliJ IDEA (other IDEs can be used, but I will only cover IntelliJ) that ultimately prints the famous sentence "Hello, World!" (for setting up a project with Maven, see "Creating a new Maven project - IntelliJ"). This should leave you with a standard Maven project tree. Finish off by creating a main class in src/main/java:

```java
public class Main {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```

But before pushing these changes to master, a few things need to be addressed.

Maven

Maven provides developers with a powerful software management tool configurable from one location: the pom.xml file.
Looking at the generated pom file in our project, we see the following:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>surgo.testing</groupId>
    <artifactId>testing-helloworld</artifactId>
    <version>1.0</version>
</project>
```

In our case, the only really interesting part of the pom file is the version tag. The reason is that upon pushing our source code to master, Maven will require a new version each time, enforcing good practice. As an extension, we need Maven to create an executable .jar file with a manifest stating where the main class is located. Luckily, we can use the official Maven JAR plugin:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>surgo.testing</groupId>
    <artifactId>testing-helloworld</artifactId>
    <version>1.0</version>

    <properties>
        <main.class>Main</main.class>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.1.2</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>${main.class}</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

The only thing you might want to change is the name of the main class in the main.class property. Remember to include the package name if the class is not located directly in src/main/java (I prefer using a property, but you can put the name directly in the <mainClass> tag if you like). Lastly, before committing our additions to master, we need to build the target folder that contains our .jar file. This can be done either directly through IntelliJ or in the terminal (if you have Maven installed): simply run the "package" lifecycle phase in the UI, or run `mvn package` in the terminal. Upon completion, a .jar file will have appeared in the target folder. This concludes the initial setup necessary for our pipeline, and we can now finally push our changes to master.

Git

Most of you are probably quite familiar with Git, but I will cover what needs to be done anyway. Git provides us with a distributed version control system easily accessible from anywhere. Provided we correctly configured our repository in Azure Repos, cloned it to our local computer, and initialized the IntelliJ project within that folder, the rest is straightforward. As none of our added files have been staged yet, run `git add .` to stage every changed or added file. Then run `git commit -m "initial commit"` to commit the staged files. Lastly, run `git push` to push the committed files to master; the full sequence is recapped below.

You might now be wondering, "Has all the magic happened?" The answer is no. In fact, not much has happened. We have created a repository and filled it with a Maven project that prints "Hello, World!" when invoked, which in all honesty is not much of an achievement. But, more importantly, we have established a foundation for our pipeline.
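For quick reference, the commands from this section, run from the repository root on the master branch, are:

```shell
# Stage all new and changed files, commit them, and push to master
git add .
git commit -m "initial commit"
git push
```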
Azure Pipelines

Pipelines, the star of the show, provides us with build and deployment automation. It lets us customize what should happen whenever a build is triggered (in our case, by pushing to master). Let me take you through setting up a simple pipeline.

Step 1: Go to the Azure DevOps Pipelines module. You will be presented with a single button, "Create Pipeline"; press it.

Step 2: You will be prompted for the location of your code. Since we used Azure Repos, press "Azure Repos Git."

Step 3: It will now look through your repositories. Press the one you pushed the Maven project to.

Step 4: Since it is a Maven project, select "Maven." You should now be presented with the following azure-pipelines.yml file:

```yaml
# Maven
# Build your Java project and run tests with Apache Maven.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/java

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package'
```

Do not think too much about the semantics of the file. The important things to note are that the trigger is set to master and that the steps include a Maven task. For more information about the Maven inputs, see the Maven task documentation.

Step 5: If everything looks in order, press "Save and run" in the top-right corner to add the azure-pipelines.yml file to the repository. The pipeline will then be activated and run its first job.

Docker

Docker, the final piece of the puzzle, provides OS-level virtualization in the shape of containers, with plenty of versatility and opportunity. We need this tool to deploy our builds onto machines, and luckily it is well integrated into the Azure DevOps platform. To fully utilize its capabilities, you will need to register on Docker Hub.

Step 1: After registering, create a repository with the name of your application and choose whether to make it public (you can only have one private repository on the free plan).

Step 2: Next, we need to authorize Docker Hub in our Azure DevOps project. Go back to Azure DevOps and click "Project Settings" in the bottom-left corner.

Step 3: Choose "Pipelines / Service Connections."

Step 4: Click the top-right button "New service connection" and search for "Docker registry." Mark it and hit next.

Step 5: Choose "Docker Hub" as the registry type.

Step 6: Fill in the remaining fields (the service connection name is up to you). You should now see your entry below "Service Connections."

The connection will become relevant later, but for now we need to go back to the project and add a few things. Since the azure-pipelines.yml file was added to the repository, run git pull to fetch the newest changes. Furthermore, we need to define our Docker image using a Dockerfile.

Step 7: Create a new file in the root of the project and name it "Dockerfile." Your project tree should now contain the Dockerfile alongside pom.xml, src, and target.

The Dockerfile should be considered a template for containers, much like classes are for objects.
What needs to be defined in this template is as follows:

- We need to set a base image for the virtual environment (FROM openjdk:8).
- We need to copy our .jar file into the image (COPY /target/testing-helloworld-?.?*.jar .).
- We need to run the .jar file when the container starts (CMD java -jar testing-helloworld-?.?*.jar).

You should now have a file looking similar to this:

```dockerfile
FROM openjdk:8
COPY /target/testing-helloworld-?.?*.jar .
CMD java -jar testing-helloworld-?.?*.jar
```

The wildcard pattern simply accounts for different versions being deployed, but the actual name has to match the .jar file from the target folder.

Update the azure-pipelines.yml File

To sum up our progress so far: we have created a Maven project, linked it to a pipeline, and created a template for the virtual environment. The only thing missing is to connect everything via the azure-pipelines.yml file.

Step 1: Add Variables

We need to add some variables for the Docker Hub connection, as well as the ever-changing version number, to the azure-pipelines.yml file (insert your own service connection and Docker repository):

```yaml
...
variables:
  containerRegistryServiceConnection: saban17-testing
  imageRepository: saban17/testing-helloworld
  tag: 1.0.0
...
```

These variables are not strictly necessary, but it never hurts to follow the DRY principle.

Step 2: Add Tasks to the Pipeline Steps

Next, we need to add more tasks to our pipeline steps. What needs to happen is: log in to Docker, build the Dockerfile previously defined, and push the image to our Docker Hub repository. One at a time, we add the wanted behavior, starting with the Docker login:

```yaml
- task: Docker@2
  displayName: dockerLogin
  inputs:
    command: login
    containerRegistry: $(containerRegistryServiceConnection)
```

Then the Docker build:

```yaml
- task: Docker@2
  displayName: dockerBuild
  inputs:
    repository: $(imageRepository)
    command: build
    Dockerfile: Dockerfile
    tags: |
      $(tag)
```

And lastly, the Docker push:

```yaml
- task: Docker@2
  displayName: dockerPush
  inputs:
    command: push
    containerRegistry: $(containerRegistryServiceConnection)
    repository: $(imageRepository)
    tags: |
      $(tag)
```

You should now have an azure-pipelines.yml file looking similar to this (note the addition of mavenAuthenticateFeed: true in the Maven@3 inputs):

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  containerRegistryServiceConnection: saban17-testing
  imageRepository: saban17/testing-helloworld
  tag: 1.0.0

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    mavenAuthenticateFeed: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package'

- task: Docker@2
  displayName: dockerLogin
  inputs:
    command: login
    containerRegistry: $(containerRegistryServiceConnection)

- task: Docker@2
  displayName: dockerBuild
  inputs:
    repository: $(imageRepository)
    command: build
    Dockerfile: Dockerfile
    tags: |
      $(tag)

- task: Docker@2
  displayName: dockerPush
  inputs:
    command: push
    containerRegistry: $(containerRegistryServiceConnection)
    repository: $(imageRepository)
    tags: |
      $(tag)
```

Understandably, this might be a little overwhelming; but fear not, it looks more complicated than it really is.
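Before relying on the pipeline, you can optionally sanity-check the Dockerfile locally. This is not part of the original walkthrough, just a quick verification, assuming Docker is installed and `mvn package` has already produced the .jar in target/:

```shell
# Build an image from the Dockerfile in the project root (the local tag name is arbitrary)
docker build -t testing-helloworld:local .

# Run it once; the container should print "Hello World!" and exit
docker run --rm testing-helloworld:local
```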
For more information about these inputs, see the Docker task documentation.

Push to the Pipeline

Finally, we get to see the magic happen. Before doing so, however, here is the routine procedure for pushing to the pipeline:

Step 1: Go into the pom.xml and azure-pipelines.yml files and increment the version number.

Step 2: Run the Maven lifecycle phase clean to remove earlier .jar files from the target folder.

Step 3: Run the Maven lifecycle phase package to build and package your code (creating the new .jar file).

Step 4: Provided you are on the master branch, run the Git commands:

```shell
git add .
git commit -m "commit message"
git push
```

Step 5: Check whether the job passes in the pipeline.

If everything went as it should, you have now uploaded an image containing your .jar file to the associated Docker Hub repository, and running that image only requires the host to have Docker installed. Let us try it! Running docker run against the Docker Hub repository initiates a container from the requested image: the image is retrieved, instantiated, and executed, and the final output displays "Hello World!" This concludes the guide for setting up your Java pipeline with Azure DevOps and Docker.

Conclusion

By now, it should hopefully be clear why this approach has its benefits. It enables the developer to define a runtime environment (the Dockerfile) and ship it to operations with little to no effort (git push). While it has not been covered here, this approach also creates artifacts in Azure DevOps, which is very useful when using something like Maven, as it makes dependencies surprisingly easy to manage. Since this approach only recently made it into our team, it is still under development, and many additions are still to be made. I highly encourage you to expand upon your pipeline and make it fit your exact needs. I hope this guide has proven useful as well as practical; should you have any further questions, feel free to comment.
The complexity of managing business-critical Microsoft SQL Server workloads with Kubernetes, especially across hybrid and multi-cloud infrastructure, can stall many organizations' modernization plans. However, DH2i's new DxOperator tool aims to change that dynamic with automated deployment and simplified management capabilities purpose-built for SQL Server availability groups in Kubernetes environments. As DH2i CEO Don Boxley explained, a major goal was streamlining Kubernetes for the SQL Server user base: "Inexperience with Kubernetes is a common barrier preventing seasoned IT pros from being able to harness the massive scalability, flexibility, and cost savings benefits of SQL Server containerization." By reducing deployment time from 30 minutes to "just a few minutes," DxOperator unlocks the path to containerization.

Easing SQL Server High Availability Burdens

For mission-critical SQL Server workloads where uptime is paramount, the complexity of configuring high availability can also impede cloud and container migration. Here, too, DxOperator steps in to remove roadblocks. "SQL Server hosts an organization's most business-critical workloads, so Kubernetes' inherent failover latency (5-minute downtime windows for pod redeployment) was totally inadequate for organizations that are often shooting for '5 nines' uptime levels," Boxley said. DxOperator builds on DH2i's existing DxEnterprise software, which has been providing microsecond-fast SQL Server failover for over a decade. That same capability now extends to availability groups in Kubernetes, enabling true high availability.

Streamlining Multi-Cloud and Hybrid Cloud Potential

What makes DxOperator uniquely valuable is its consistent approach across on-premises, public cloud, and hybrid infrastructure. As Boxley explained, "Our customers' environments are only growing more heterogeneous, and we've always committed ourselves to infrastructure agnostic, cross-platform software solutions." DxOperator simplifies configuration and maintains availability guarantees regardless of the underlying platform, including hybrid cloud and multi-cloud architectures. The automation flexibly adapts to any target environment. "Previously, ambitious goals such as cloud vendor diversification have represented nothing more than a pipe dream aspiration for most companies. However, we've witnessed evidence that organizations are increasingly pursuing any available avenues for increased SQL Server resiliency, especially as cross-platform technologies like DxEnterprise are emerging to simplify and support these complex environments," said Boxley.

Key DxOperator Capabilities and Integration

So what exactly does DxOperator do under the hood? As a Kubernetes controller, it is natively integrated to take advantage of core constructs like pods and nodes. The tool handles numerous complex configurations such as storage settings, container resource allotments, pod naming conventions, and configurable synchronous or asynchronous replication. "Deployment automation handles complex configurations like custom annotations, specific container specifications, and quality of service parameters, and makes deployment achievable in just a few minutes, unlocking peak scalability," Boxley summarized. For infrastructure-as-code and DevOps best practices, DxOperator adoption is simple, with the ability to easily script deployment and orchestration using Kubernetes standards like Helm charts.
Meanwhile, backend monitoring, logging, and troubleshooting leverage native Kubernetes interfaces. "DxOperator is the first class Kubernetes controller for SQL Server containers. Thus, it monitors and takes real-time corrective actions on SQL Server containers if a failure is detected for any of the managed SQL containers. Every event, every action, and every health anomaly is fully logged. 'Kubectl logs' can be used on the DxOperator or SQL Server container at any time for diagnostics or log scraping," Boxley explained.

Getting Started With SQL Server and Kubernetes

For those just getting started running SQL Server on Kubernetes, simplicity and automation should be top priorities, according to Boxley: "Your best approach is to get familiar with the solutions available that can help reduce the learning curve and integrate as much automation as possible. For example, DxOperator is a highly powerful tool with its ability to automate SQL Server Availability Group deployment and handle complex configurations." Complementary tools like SUSE Rancher can also make Kubernetes administration easier through a graphical user interface.

Taking advantage of robust, specialized solutions for deploying and managing SQL Server workloads allows organizations to overcome knowledge gaps and focus on business applications rather than infrastructure complexities. By leveraging DxOperator for simplified, automated SQL Server high-availability configuration on Kubernetes, the promise of seamless hybrid and multi-cloud flexibility comes closer to reality, even for demanding mission-critical workloads. Innovation often arises not from building entirely new things but from perfecting and expanding access to existing technologies. For SQL Server HA on Kubernetes, that innovation is DxOperator.
What if you could eliminate productivity obstacles and accelerate delivery from code to production through automated Azure pipelines? The promise of cloud-powered DevOps inspires, yet its complexities often introduce new speed bumps that hamper release velocity. How can development teams clear these hurdles and achieve continuous delivery? This guide aims to illuminate the path by unraveling the mysteries of Azure Pipelines. You'll discover best practices for optimizing build and release workflows that minimize friction and downstream delays, unlocking your team's agile potential.

Simplifying Continuous Integration/Continuous Delivery (CI/CD)

Progressive teams once struggled with ad hoc scripts and brittle, hand-rolled integration and delivery machinery when managing software projects at scale. Azure Pipelines delivers turnkey CI/CD, automating releases reliably through workflows, abstracting away complexity, and saving hundreds of hours better spent building products customers love. Pipelines are configured with triggers that set code commits or completion milestones into motion, executing sequential jobs such as builds, tests, approvals, and deployments according to codified specifications that are adaptable across environments, standardizing the engineering rhythm. Integration tasks compile libraries, run quality checks, bundle executables, and publish artifacts consumed downstream. Deploy jobs then release securely to on-premises, multi-cloud, or Kubernetes infrastructure worldwide.

Configurable settings give granular control, balancing agility with oversight throughout the software lifecycle. Syntax options toggle manual versus automatic approvals at stage gates, while failure policies customize rollback, retry, or continue logic, matching risk appetite to safeguard continuity. Runtime parameters facilitate dynamic bindings to frequently changing variables. Azure Pipelines lifts engineering out of deployment darkness into a new era of frictionless productivity!

Embedding Quality via Automated Testing Processes

Progressive teams focused on chasing innovation often delay hardening software, which is vital to preventing downstream heartache once systems reach customers. Azure Test Plans embeds robust quality processes directly within developer workflows to catch issues preemptively, while automated testing maintains consistent protection, guarding against regressions as enhancements compound over time. Test plans manage test cases, codifying requirements, validation criteria, setup needs, and scripts that developers author collaboratively while sprinting on new features. Execution workflow automation links code check-ins to intelligently run related test suites across browser matrices on hosted lab infrastructure without occupying local computing capacity. Tests also integrate within pipelines at the various test and staging environments, ensuring capabilities function end-to-end before reaching production. Rich analytics dashboards detail test pass/fail results, visualize coverage and historical trends, and prioritize yet-to-be-mitigated defects. Integrations with partner solutions facilitate specialized test types such as user interface flows, load testing, and penetration testing, rounding out the assessment angles. Shipping stable and secure software demands discipline; Azure Test Plans turns that necessity into a habit-forming competitive advantage!
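To make these ideas concrete, a minimal multi-stage pipeline that builds, runs tests, and then deploys through an approval-gated environment might look like the following sketch. The build script and the environment name `staging` are illustrative; approvals and checks are configured on the environment in Azure DevOps rather than in the YAML itself:

```yaml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    # Compile, run unit tests, and publish the artifact consumed by the deploy stage
    - script: ./build.sh
      displayName: Build and run tests
    - publish: $(System.DefaultWorkingDirectory)/out
      artifact: app

- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: DeployToStaging
    environment: staging   # approvals/checks on this environment gate the stage
    strategy:
      runOnce:
        deploy:
          steps:
          # Pipeline artifacts are downloaded automatically for deployment jobs
          - script: echo "Deploying artifact from build $(Build.BuildId)"
            displayName: Deploy
```

Approvals, failure policies, and runtime parameters then layer governance onto this skeleton without changing its overall shape.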
Monitoring App Health and Usage With Insights

Monitoring application health and usage is critical for delivering great customer experiences and optimizing app performance. Azure Monitor provides invaluable visibility when leveraged effectively through the following approaches:

- Configure app health checks: Set up tests that probe the availability and response times of application components. Catch issues before customers do.
- Instrument comprehensive telemetry: Trace transactions end-to-end across complex microservices ecosystems to pinpoint frictions impacting user workflows.
- Aggregate logs centrally: Pull together operational signals from networks, web servers, databases, and more into intuitive Power BI dashboards tracking business-priority metrics, from marketing clicks to sales conversions.
- Analyze usage patterns: Reveal how customers navigate applications and uncover adoption barriers early through engagement telemetry and in-app surveys.

Tying app experiences to downstream business outcomes allows data-driven development that directly responds to real customer needs through continuous improvement.

Collaborating on Code With Azure Repos

Before teams can scale and deliver innovation consistently, the foundational practices for optimizing productivity and reducing risk start with version-controlling code in robust repositories that facilitate collaboration, back up assets, and enable reproducible builds. Azure Repos delivers Git-based repositories that secure centralized assets while supporting agile branch-management workflows, letting distributed teams access projects in parallel without colliding changes. Flexible repository models host public open source or private business IP with granular permission isolation. Developer clones facilitate local experimentation, with sandbox branches merged upon review. Advanced file lifecycle management automates asset cleanup, while retention policies persist historical snapshots cost-efficiently.

Powerful pull requests enforce peer reviews, ensuring changes meet architectural guidelines and performance standards before being accepted into upstream branches. Contextual discussion threads on code reviews let teams iterate on fixes and resolve comments before merging, preventing redundant issues from slipping downstream. Dependency management automatically triggers downstream builds, updating executables ready for staging deployment once the merge is published. Share code confidently with Azure Repos!

Securing Continuous Delivery With Azure Policies

As code progresses through staged environments and ultimately updates production, consistent rollout verification checks and access oversight prevent dangerous misconfigurations from going uncaught until after deployment, when customers face disruptions. Azure Pipelines can rely on Azure Policy to extend guardrails and portably secure pipeline environments, using governance rules, compliance enforcement, and deviation alerts scoped across management hierarchies. Implementing robust security and compliance policies across Azure DevOps pipelines prevents dangerous misconfigurations from reaching production. Azure Policy is invaluable for:

- Enforcing pipeline governance consistently across environments.
- Automating cloud security best practices.
- Monitoring configuration states against compliance baselines.

Codify Pipeline Safeguards With Azure Policy

Specifically, leverage Azure Policy capabilities for:

- Restricting pipeline access to only authorized admins.
- Mandating tags for operational consistency.
- Limiting deployment regions and resource SKUs (a sketch of this follows below).
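As one hedged example of that last point, a custom policy restricting deployment regions could be defined and assigned with the Azure CLI roughly as follows. The rule file, names, and scope are placeholders rather than values from this article:

```shell
# Create a custom policy definition from a local rule file.
# rule.json is assumed to contain a standard "if/then" policy rule that denies
# resources created outside an approved list of regions.
az policy definition create \
  --name restrict-deployment-regions \
  --rules rule.json \
  --mode All

# Assign the definition at the subscription scope so every pipeline-created
# resource inherits the guardrail.
az policy assignment create \
  --name restrict-deployment-regions \
  --policy restrict-deployment-regions \
  --scope "/subscriptions/<subscription-id>"
```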
Policies also scan configurations, alerting when controls drift from the desired state due to errors. Automated remediation then programmatically brings resources back to a compliant posture.

Carefully Orchestrate Production Upgrades

Smoothly rolling out updates to mission-critical, global-scale applications requires intricate staging that maintains continuity, manages risk tightly, and fails safely if issues emerge post-deployment:

- Implement canary testing on small pre-production user cohorts to validate upgrades and limit the blast radius if regressions appear.
- Utilize deployment slots to hot-swap upgraded instances only after health signals confirm readiness, achieving zero downtime.
- Incorporate automated rollback tasks that immediately revert to the last known good version at the first sign of problems.

Telemetry-Driven Deployment Analysis

Azure Monitor plays a powerful role in ensuring controlled rollout success by providing the following:

- Granular instrumentation across services and dependencies.
- Holistic dashboards benchmarking key app and business health metrics before and after deployments.
- Advanced analytics detecting anomalous signals indicative of emerging user-impacting incidents.

Together, these capabilities provide empirical confidence for innovation at scale while preventing disruptions. Monitoring proves most valuable when it is purpose-built into DevOps pipelines from the initial design stage.

Pro Tip: Perfect your health signals by assessing key user journeys end-to-end, combining app load tests, dependency uptime verification, and failover validations across infrastructure tiers to detect deterioration before it becomes user-facing.

Actionable Analytics Powering Decision-Making

Actionable analytics empower data-driven decisions that optimize software delivery by:

- Translating signals into insightful recommendations that focus priorities.
- Answering key questions about whether programs are trending on time or at risk of falling behind.
- Visualizing intuitive Power BI dashboards aggregated from 80+ DevOps tools, tracking burndown rates, queue depths, and past-due defects.
- Allowing interactive slicing by team, priority, and system to spotlight constraints and target interventions that accelerate outcomes.
- Infusing predictive intelligence via Azure ML that forecasts delivery confidence and risk, helping leaders assess tradeoffs.
- Gathering real-time pulse-survey feedback that reveals the challenges teams self-report, to orient culture priorities.

Driving Data-Informed Leadership Decisions

Robust analytics delivers DevOps success by:

- Quantifying productivity health and assessing program timelines against strategic commitments.
- Identifying delivery bottlenecks for surgical interventions that remove impediments.
- Forecasting team capacity, shaping staffing strategies and risk mitigations.
- Monitoring culture signals to ensure priorities align with participant feedback.

Sustaining Analytics Value

To sustain analytics value over time:

- Measure how often analytics are used directly in decisions to assess their utility.
- Iterate on dashboards, incorporating leaders' feedback to enhance relevance.
- Maintain consistency in longitudinal tracking, avoiding frequent churn in metric definitions.
- Nurture data fluency, building the competencies needed to adopt insights with trust.

Let data drive responsive leadership, keeping hard calls and celebration in balance. Outcomes accelerate when data transforms decisions! Onwards, creative coders and script shakers! What journey awaits, propelled by the power of Azure Pipelines?
Dare mighty expeditions and materialize ambitious visions that were improbable before cloud DevOps elevated delivery to wondrous new heights. How will your team or company innovate by leveraging Azure's trusted capabilities and harnessing the winds of change? Let us know in the comments below.
However, these intricacies are encapsulated behind the scenes with SAP Build Apps. Regardless of their coding expertise, users can leverage intuitive visual interfaces to rapidly build applications. In this article, we'll delve into the details of low coding, discover what SAP Build Apps is, and examine practical applications of the technology in business.

What Is Low-Code/No-Code?

Coding applications from scratch is a time-consuming process that requires in-depth, often specialized know-how of the development platform. Low-code development platforms and tools (and sometimes no-code solutions) simplify this process by removing much of the manual coding traditionally required in software development, similar to what happened when object-oriented programming overtook procedural programming languages in popularity. These platforms enable users to create applications with minimal hand-coding by leveraging visual interfaces, pre-built components, and declarative logic. No-code means developing apps and software without writing program code at all; low-code, on the other hand, requires elementary programming knowledge or a basic technical understanding. Compared with a traditional pro-code software engineering approach, the two mainly differ in development complexity and flexibility.

The implementation takes place on specially designed no-code or low-code platforms. For mobile application development, for example, these platforms are typically characterized by the following:

- Convenient web-based access to the development environment
- Graphical user interfaces
- A drag-and-drop approach
- Ready-made templates and reusable components
- Support for Android/iOS builds

The special feature of the low-code principle is that businesses can create applications with very little programming knowledge. Accordingly, the global low-code development platform market is on a significant growth trajectory. Forecasts indicate a substantial revenue increase, soaring from $10.3 billion in 2019 to an impressive $187.0 billion by 2030, with a compound annual growth rate (CAGR) of 31.1% from 2020 to 2030. The COVID-19 pandemic has also increased the need for organizations to automate processes and prioritize digital transformation initiatives. Leveraging the power of automation and low-code solutions, businesses can rapidly deploy innovative applications and accelerate their digital transformation journey while minimizing development complexities.

SAP Solution for Enterprise Software Development

As a foundation for the intelligent, sustainable enterprise, the SAP Business Technology Platform (SAP BTP) brings together data and analytics, artificial intelligence, application development, automation, and integration in a unified environment. SAP has come a long way in developing this infrastructure. In 2021, SAP acquired the low-code pioneer AppGyver, a low-code development platform that offered users visual tools for building applications without extensive coding. Distinctive features of AppGyver included a drag-and-drop interface builder, pre-built templates for common app components, and robust integration capabilities for connecting with various data sources. AppGyver was then part of SAP's business process intelligence portfolio as a Software-as-a-Service (SaaS) solution until November 2022.
SAP carried out the acquisition to extend its full-service offering, "RISE with SAP." With RISE, SAP intends to give its customers the best possible support for digitalization and cloud migration. The SAP no-code, low-code, and pro-code solutions in the BTP environment are represented by the SAP Build family of products and tools, including the following:

SAP Build Apps

- User-friendly environment: SAP Build Apps provides a user-friendly environment for application development, allowing users of all skill levels to create apps through drag-and-drop actions.
- Visual creation of logic: Users can visually create data models and business logic without the need for complex code tables, streamlining the development process.
- Preconfigured components: Build Apps offers preconfigured components, connectors, and integrations, facilitating seamless connections with both SAP and non-SAP solutions.

SAP Build Process Automation

- Code-free automation: SAP Build includes process automation capabilities, automating workflow processes within a company without the need for coding.
- AI capabilities: The automation uses RPA (robotic process automation) as one of its capabilities to reduce the manual effort required for some business processes.
- Flexibility and adaptability: SAP Build aims to improve flexibility and adaptability to market changes by simplifying the development process.

SAP Build Work Zone

- Centralized access: SAP Build Work Zone combines SAP Launchpad and SAP Work Zone to create business pages with central access to applications.
- Drag-and-drop customization: Web interfaces can be easily customized using drag-and-drop actions, ensuring adaptability to specific needs.
- Cross-departmental collaboration: Members across departments can develop and share business pages accessible from desktops, mobile tablets, and other devices. Access extends beyond employees, allowing customers and partners to use specific services.

SAP Build Code

- Generative AI-based code development with the Joule copilot, optimized for Java and JavaScript application development, lets code-first users also build visually.
- Integrated service center (for APIs and services): A centralized hub for managing and orchestrating project APIs and services, offering developers a unified interface for seamless configuration and interaction.
- Guided tutorials and templates for projects: Comprehensive tutorials and templates streamline project setup, providing step-by-step instructions and reusable structures for accelerated development.
- Orientation toward current SAP cloud technologies for services and mobile development: Specialized support for developers using SAP CAP (Node.js, Java), SAP MDK, and BTP Mobile Services, including documentation, examples, and troubleshooting guidance.
- Natural-language-prompted generation: Developers can create data models, services, UI annotations, and function logic through natural language prompts, simplifying the development process.
- Unit tests and sample data generation: Built-in tools for unit tests and sample data generation ensure code reliability and facilitate testing with realistic scenarios.
- Automated code review and suggestions: An intelligent system automatically reviews code and provides suggestions to enhance quality and maintainability, fostering robust and error-free software development.

Build Apps, Build Process Automation, Build Work Zone, and Build Code form a common environment.
With SAP BTP Build, users receive a single point of contact for all of these solutions and can implement their development tasks with greater agility. Since the offering consistently relies on low code and everything can be created visually, employees with no programming knowledge are included as well, while with Build Code, developers can build flexible and scalable applications. This is intended to promote collaboration within the company across departmental boundaries.

SAP Build Apps Pros and Cons

The Main Advantages of SAP Build Apps

Scalability and Future-Proofing

As part of the Intelligent Enterprise concept, SAP Build Apps provides a scalable solution that can grow with the evolving needs of the business. The platform's continuous development ensures that it remains at the forefront of technology trends, offering a future-proof solution for long-term application development strategies.

Faster Time-to-Market

The cost-effective, low-code nature of SAP Build Apps contributes to faster development cycles, reducing the time it takes to bring new applications to market. This agility is a significant advantage in dynamic business environments where rapid innovation and quick adaptation to market changes are crucial.

Enhanced Collaboration Across Teams

By eliminating extensive training requirements and the need for deep programming knowledge, SAP Build Apps fosters collaboration across diverse teams within an organization. Business users, subject matter experts, and IT professionals can collaborate more effectively in the application development process, improving cross-functional teamwork.

Adaptability to Changing Business Requirements

The graphical user interface and pre-built templates empower professional programmers to address standardized software requirements rapidly. This adaptability ensures that businesses can respond swiftly to changing market conditions, customer demands, or regulatory requirements without significantly overhauling their application landscape. Moreover, rapid extension development facilitates seamless integration with existing SAP applications, allowing businesses to adapt and enhance their digital ecosystem swiftly. These advantages collectively contribute to SAP Build Apps' standing as a leading low/no-code platform, offering not only cost-effectiveness and ease of use but also improved ROI.

Single Point of Entry (SAP BTP)

Centralizing the development process through SAP BTP streamlines project management, ensuring a unified and efficient approach to application development from inception to deployment. Furthermore, integrating AI and ML into business processes enhances efficiency and automates repetitive tasks. If your organization already leverages BTP for extension scenarios, integration, and application development, Build Apps could be the ideal strategy for your lightweight mobile application development needs.

Uncomplicated Deployment on Different Operating Systems

The capability to deploy seamlessly on various operating systems simplifies the deployment process, enhancing accessibility and usability for a broader user base. As in typical hybrid approaches, you can develop an application once and deploy it as a web-based application or as versions compatible with iOS and Android.

SAP Build Apps also has a few disadvantages.

Complex Customizations

While SAP Build Apps offers a user-friendly environment for application development, it may face challenges when dealing with highly complex or unique customizations.
Advanced and intricate business requirements may call for additional coding or customization beyond the capabilities of the low/no-code platform.

Integration Challenges

While SAP Build Apps emphasizes seamless integration with SAP and non-SAP solutions, businesses with complex existing IT landscapes may encounter challenges in integrating the platform with certain legacy systems or third-party applications. Integration complexity can be a limitation for organizations with diverse technology stacks. For such cases, a dedicated middleware layer is a must-have.

Dependency on the SAP Ecosystem

The effectiveness of SAP Build Apps is closely tied to the overall SAP ecosystem. Organizations heavily invested in SAP solutions may find it advantageous, but those relying on a broader range of technologies might face limitations in interoperability and integration with non-SAP environments. This can impact the platform's suitability for businesses with diverse technology stacks.

Skills and Expertise for Low Coding: Who Can Use SAP Build Apps?

Both citizen developers and professional developers can benefit from SAP Build Apps. Let's first examine their responsibilities within the workflow.

A professional developer prepares the landscape, architects the data model, and ensures that the required integration and security are in place. Responsibilities:

- Understands the development lifecycle concept, testing, versioning, and maintenance
- Supports systems integration
- Ensures governance and security
- Plans and implements authentication
- Performs the customization of components and UI design

Citizen developers can use SAP Build in a specific domain to create applications and automate processes without a computer science background. Responsibilities:

- Focuses on business logic
- Creates and publishes low-code/no-code apps and customizations with close links to business contexts
- Creates prototypes and POCs to check and verify ideas

Key SAP Build Apps Features and Possibilities

Companies rely on rapid app development and the implementation of all requirements, even when they commission non-programmers to carry out the project. SAP Build Apps intends to make this possible with the visual creation of prototypes, so the user interfaces of newly designed apps can be tested as early as possible. The low-code approach is intended to involve as many employees as possible from different departments in development, even without programming knowledge. The low-code app development process with SAP Build Apps stands out through its visual development environment, minimal coding requirements, and prebuilt components. Automated tools and rapid iterations facilitate testing, and SAP expertise enhances development by providing specialized tools, optimized integrations, and efficient troubleshooting capabilities.

How To Build and Innovate With SAP

LeverX experts suggest a step-by-step process:

1. Define Requirements

Clearly define the requirements of your application, including data sources, user interfaces, and business logic. Leverage the SAP Fiori Design Guidelines within SAP Build Apps to create responsive, user-friendly UIs through a low-code approach.

2. Model Data and Logic

Design the data model and business logic of your application. Implement the required connectors and models within SAP Build Apps, and implement and configure authorization mechanisms and the required services within SAP BTP.

3. Integration and Extensions

Integrate your application with other systems and extend its functionality if necessary.
Utilize pre-built connectors and integration tools within SAP Build Apps to seamlessly connect your low-code app with SAP and non-SAP systems, reducing development time and effort.

4. Test and Deploy

Test your application thoroughly and deploy it to a productive environment. Leverage the built-in testing and deployment tools in SAP Build Apps to ensure the reliability and scalability of your low-code application, accelerating time-to-market.

Overall, SAP Build Apps offers a streamlined, end-to-end, low-code development process featuring drag-and-drop UI development and extensive integration capabilities. These features contribute to an improved user experience, faster development cycles, and efficient integration, providing tangible business benefits for organizations embracing low-code development with SAP Build Apps.

SAP Build Apps Use Cases

As a low-code provider, SAP pays particular attention to complex case management and automation. Let's examine a few use cases of SAP Build Apps in these areas.

Orders Management (S/4HANA Extension)

Problem: Complexity in system object processing, limitations in consistently utilizing standard S/4HANA interfaces, and a requirement to employ a consistent pattern/template when generating orders.

Solution: A user-friendly mobile app that effortlessly merges data from various origins into a unified business scenario.

Use case: An enterprise using SAP S/4HANA for order management streamlines the entire order-to-cash process. The mobile application captures customer orders, updates inventory in real time, and integrates with financial systems for accurate billing. The unified platform provides visibility into order status, inventory levels, and financial transactions, enhancing operational efficiency and customer satisfaction via lightweight, simple-to-use mobile applications.

Products Catalog Integrations (Ariba Integration)

Problem: Absence of search functionality and the need to establish connections with external catalogs.

Solution: Straightforward searching through Catalog API capabilities and external data providers.

Use case: In procurement, a company employs SAP Ariba to integrate product catalogs from various suppliers. Buyers can access Ariba's centralized, up-to-date catalog, enabling them to easily compare products, prices, and specifications via mobile devices, anytime and anywhere. This integration streamlines the purchasing process, ensures data accuracy, and facilitates better supplier negotiation, which is particularly advantageous for users who are frequently on the move and need information or services on the go.

Plant Maintenance (Data Collected Using SAP Datasphere)

Problem: The need for convenient access to data gathered from machinery and consolidated within a single data warehouse.

Solution: Effortlessly view machinery statuses on a mobile app for rapid access.

Use case: A manufacturing facility uses SAP Datasphere to store work order data along with data retrieved from various sensors and machinery. The application monitors equipment health in real time and can predict potential failures using IoT data. This ensures that employees have the most current information at their fingertips, facilitating better decision-making. Mobile updates, alerts, and personalized messages help users schedule maintenance activities proactively. This predictive maintenance approach minimizes downtime, reduces maintenance costs, and extends the lifespan of critical assets.
Warehouse Operator Apps (Third-Party App Integrations)

Problem: Making warehouse position reservations requires a complex third-party desktop application for warehouse operators.

Solution: An app that consolidates data from various sources, simplifying the process of booking materials from a mobile device.

Use case: A logistics company integrates a third-party warehouse operator app with SAP to optimize warehouse operations. Warehouse operators can access the app and SAP functionality from mobile devices to manage inventory, track shipments, and generate reports while on the warehouse floor. Integration with SAP ensures data consistency across the supply chain, enhances order fulfillment accuracy, and improves overall warehouse efficiency.

Education Plans Tracking (SuccessFactors Integration)

Problem: The absence of a mobile, easy way to access corporate learning plans, monitor progress in a custom way, and track important milestones.

Solution: A mobile app for corporate learners.

Use case: An educational institution utilizes SAP SuccessFactors to track and manage education plans for students and faculty. The application monitors academic progress, sets learning objectives, and supports performance reviews. The mobile format facilitates quick status updates and fosters collaboration, contributing to a more synchronized e-learning process. SuccessFactors aids in talent development and ensures alignment with organizational goals in the education sector.

Standalone Apps (AI-Enabled Solutions)

Problem: Lack of automation and AI capabilities in classical "pen and paper" solutions.

Solution: An easy-to-use, extendable solution that can be integrated with modern AI capabilities.

Use case: A company deploys standalone AI solutions developed with SAP technology to enhance decision-making. For instance, an AI-driven analytics app powered by SAP technologies can analyze large datasets, identify patterns, and provide actionable insights. Mobile solutions enable real-time data updates and complement existing SAP systems, offering advanced analytics capabilities for strategic decision support.

Given the versatile nature of SAP's modular approach, SAP Build Apps allows organizations to address specific requirements within different functional areas while ensuring seamless connectivity across the enterprise landscape.

SAP Build Apps FAQ

Q: Do I need to code anywhere in the SAP Build Apps process? If so, when and where exactly?

A: Yes, but coding is minimized thanks to the platform's low-code nature. You can build applications using visual tools, drag-and-drop interfaces, and predefined components without extensive coding. However, if you require advanced customization, integration with external systems, or specific business logic beyond the capabilities of the visual tools, you may need to write code in those particular scenarios. Overall, SAP Build Apps aims to provide a streamlined development process with minimal coding while allowing flexibility for coding when needed.

Q: What if I want to modify some features or components? Does the tool allow me to do this? What are the limitations?

A: SAP Build Apps allows you to modify features and components to a certain extent. The platform offers flexibility for customization through low-code approaches, enabling users to adapt predefined components and modify features using visual tools.
If your customization requirements extend beyond the capabilities of the provided visual tools, you may encounter constraints in achieving highly specialized modifications. In such cases, you might need to consider more extensive coding or explore other SAP development tools to address specific and intricate customization needs. It's essential to assess the complexity of your modifications and leverage the platform accordingly, balancing ease of use with the level of customization required.

Q: What other SAP solutions and systems can SAP Build Apps be integrated with?

A: Through open interfaces (REST/OData), applications designed and implemented in SAP Build Apps can be seamlessly integrated with key SAP solutions and systems, including S/4HANA, SuccessFactors, Ariba, Integration Suite, Process Automation, Datasphere, and the SAP Business Technology Platform AI capabilities. This enables developers to create custom applications aligned with specific business needs across various domains, such as user experience, enterprise resource planning, human capital management, procurement, and advanced data processing. Integration with these SAP solutions provides a versatile development environment within the SAP ecosystem.

Q: Can I integrate SAP Build Apps with other third-party systems?

A: Yes, SAP Build Apps supports various integration methods, including RESTful APIs, web services, and other standard protocols. This flexibility enables developers to incorporate data and functionality from external systems into their applications.

Q: Where can I deploy SAP Build Apps applications (app stores, etc.)?

A: You can package and deploy SAP Build Apps applications as mobile apps for Android and iOS, making them accessible in exactly the same way as other Android and iOS applications.

Conclusion

If you have a reasonably licensed low-code platform that allows you to build applications quickly, has all the necessary features and integrations, and offers acceptable performance, you should use it as often as possible. The only exception is consumer applications, where runtime performance and support for native device capabilities matter more than time-to-market and development costs. Used in the right place, low-code development platforms can make a decisive contribution to company success. With Build, SAP is pursuing the idea of combining the many services and solutions necessary for a unified developer experience.
In the ever-evolving landscape of software deployment, GitOps has emerged as a game-changer, streamlining the journey from code to cloud. This article explores GitOps using ArgoCD, a prominent GitOps operator, focusing on two repositories: the application repository gitops-apps-hello and the source-of-truth repository gitops-k8s-apps. We'll set up a workflow that integrates these repositories with ArgoCD for seamless deployment. Fork these repos and replace the references in the article below to experiment on your own.

Understanding GitOps With ArgoCD

GitOps is more than just a buzzword; it's a paradigm that leverages Git as the single source of truth for infrastructure and application configurations. Integrating GitOps with ArgoCD enhances the deployment process, offering a robust solution for managing Kubernetes clusters.

Key benefits:

- Automated deployments: Changes in the Git repository automatically trigger deployments.
- Improved traceability: Every change is traceable through Git commits.
- Enhanced security: Git's inherent security features bolster the deployment process.

Setting Up a GitOps Workflow With ArgoCD

To exploit the full potential of GitOps, let's set up a workflow using the specified repositories and ArgoCD.

Prerequisites:

- A Kubernetes cluster
- Access to the specified Git repositories
- ArgoCD installed on your Kubernetes cluster (covered in Step 1 below)

Step 1: Install ArgoCD

Install ArgoCD on your Kubernetes cluster using the following commands:

```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Step 2: Access the ArgoCD UI

Once installed, access the ArgoCD UI by port-forwarding:

```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
```

Then visit https://localhost:8080 in your browser.

Step 3: Connect Repositories

Connect the application repo and the source-of-truth repo to ArgoCD:

```shell
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
argocd login localhost:8080 --username admin --password <password>
```

```shell
argocd repo add https://github.com/brainupgrade-in/gitops-apps-hello
argocd repo add https://github.com/brainupgrade-in/gitops-k8s-apps
```

Step 4: Define an Application in ArgoCD

Define an application in ArgoCD that points to the gitops-k8s-apps repository. This can be done either via the UI or the CLI. Here's an example using the CLI:

```shell
argocd app create argocd --repo https://github.com/brainupgrade-in/gitops-k8s-apps.git --path argocd/apps --dest-server https://kubernetes.default.svc --dest-namespace argocd
```

The same repository also carries configuration overlays for ArgoCD itself, which are applied as patches:

```shell
curl -o argocd-cm.yaml https://raw.githubusercontent.com/brainupgrade-in/gitops-k8s-apps/main/argocd/overlays/argocd-cm.yaml
kubectl patch cm argocd-cm -n argocd --type strategic --patch-file argocd-cm.yaml
curl -o argocd-repo-server-deploy.yaml https://raw.githubusercontent.com/brainupgrade-in/gitops-k8s-apps/main/argocd/overlays/argocd-repo-server-deploy.yaml
kubectl patch deployment argocd-repo-server -n argocd --type strategic --patch-file argocd-repo-server-deploy.yaml
```

Automating Deployments

With ArgoCD, changes in the gitops-k8s-apps repository automatically trigger deployments in your Kubernetes cluster. ArgoCD continuously monitors this repository and ensures that the cluster's state matches the desired state defined in the repository.
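As an alternative to the CLI, an application can also be declared as an ArgoCD Application resource with automated sync enabled, which is what lets Git pushes roll out without manual intervention. This is a sketch only; the path, target revision, and destination namespace are assumptions based on the repositories referenced in this article:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/brainupgrade-in/gitops-k8s-apps.git
    targetRevision: main      # branch to track (assumed)
    path: hello/e2e           # path to the app manifests (assumed)
  destination:
    server: https://kubernetes.default.svc
    namespace: default        # namespace to deploy into (assumed)
  syncPolicy:
    automated:
      prune: true             # delete resources that are removed from Git
      selfHeal: true          # revert manual drift in the cluster
```

Applying this manifest with kubectl (for example, `kubectl apply -n argocd -f hello-app.yaml`) registers the application just as the CLI command does.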
Example: Updating an Application Integrating Jenkins CI/CD into our GitOps workflow adds another layer of automation and control, especially when updating applications. Jenkins can be used to automate the process of building, testing, and deploying changes to our application repository, brainupgrade-in/gitops-apps-hello, and then updating our source of truth repository, brainupgrade-in/gitops-k8s-apps, which in turn triggers ArgoCD to deploy these changes to the Kubernetes cluster. Setting up Jenkins for Application Updates Jenkins Installation and Setup: First, ensure Jenkins is installed and properly configured. You should have a Jenkins server with the necessary plugins for Git and Kubernetes. Creating a Jenkins Pipeline: Create a new Jenkins pipeline that is triggered by changes in the gitops-apps-hello repository. This pipeline will handle the build and test processes for the application. Groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Commands to build the application
                sh 'echo Building Application...'
            }
        }
        stage('Test') {
            steps {
                // Commands to test the application
                sh 'echo Running Tests...'
            }
        }
        stage('Update Source of Truth') {
            steps {
                // Updating the gitops-k8s-apps repository with the new application version
                script {
                    // Fetch the new image tag or build number
                    def newImageTag = 'my-app-image:1.1.0' // Example tag
                    // Clone the source of truth repository
                    sh 'git clone https://github.com/brainupgrade-in/gitops-k8s-apps.git'
                    // Each sh step starts in the workspace root, so run the update inside the clone
                    dir('gitops-k8s-apps') {
                        // Update the Kubernetes manifests with the new image tag
                        sh "sed -i 's|newTag: .*|newTag: ${newImageTag}|' ./hello/e2e/kustomization.yaml"
                        // Commit and push the changes
                        sh 'git commit -am "Update app version"'
                        sh 'git push origin main'
                    }
                }
            }
        }
    }
}
Handling Repository Credentials: Ensure Jenkins has the necessary credentials to access both Git repositories. This can be set up in the Jenkins credentials store. Webhooks for Triggering Builds: Set up webhooks in the gitops-apps-hello repository to trigger the Jenkins pipeline automatically whenever there's a push to the repository. Pipeline Execution: When changes are pushed to the gitops-apps-hello repository, the Jenkins pipeline triggers. It builds and tests the application, and if these stages succeed, it proceeds to update the gitops-k8s-apps repository with the new application version. ArgoCD Deployment: Once Jenkins pushes the updated Kubernetes manifests to the gitops-k8s-apps repository, ArgoCD detects these changes. ArgoCD then synchronizes the changes, deploying the updated application version to the Kubernetes cluster. Benefits of Using Jenkins in the GitOps Workflow Automated Testing and Building: Jenkins automates the build and test phases, ensuring that only thoroughly tested applications are deployed. Traceability: Every change is logged and can be traced back through Jenkins builds, providing an audit trail. Flexibility: Jenkins pipelines can be customized to include additional stages like security scanning or integration testing. Efficiency: This integration streamlines the process from code change to deployment, reducing manual intervention and speeding up the release process. Integrating Jenkins into our GitOps workflow for updating applications adds a robust, automated pipeline that ensures reliability and efficiency in deployments. This combination of Jenkins for CI/CD and ArgoCD for GitOps offers a powerful toolset for modern cloud-native application deployment.
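If you want to dry-run the "Update Source of Truth" stage outside of Jenkins, the same steps can be executed by hand. This is a sketch under the assumption that 1.1.0 is the freshly built tag and that newTag in the kustomization holds only the tag portion; adjust the sed expression if your kustomization stores the full image reference. Shell
# Manual version of the pipeline's "Update Source of Truth" stage.
NEW_TAG="1.1.0"   # placeholder for the tag produced by your build
git clone https://github.com/brainupgrade-in/gitops-k8s-apps.git
cd gitops-k8s-apps
sed -i "s|newTag: .*|newTag: ${NEW_TAG}|" ./hello/e2e/kustomization.yaml
git diff                                   # review the change before it becomes the desired state
git commit -am "Update app version to ${NEW_TAG}"
git push origin main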
Best Practices for GitOps With ArgoCD Immutable Infrastructure: Treat infrastructure as code; all changes should be through Git. Review and Approval Processes: Use pull requests and code reviews for changes in the Git repositories. Regular Monitoring: Keep an eye on the ArgoCD dashboard for the status of applications. Security Practices: Implement secure access controls and audit trails for both repositories. Conclusion Mastering GitOps with ArgoCD is a stepping stone towards efficient and reliable software deployment. By leveraging the robustness of Git and the automation capabilities of ArgoCD, we can achieve a seamless deployment process. This approach resonates with modern software development needs, ensuring a smoother and more controlled path from code to cloud. As we continue to innovate, embracing methodologies like GitOps will be pivotal in shaping efficient and secure software deployment landscapes.
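For the regular-monitoring practice mentioned above, the argocd CLI is a quick complement to the dashboard. A minimal sketch, assuming you are still logged in from the earlier steps and using the application name created in this article: Shell
argocd app list                    # sync and health status of every application
argocd app get argocd              # details for the application created earlier
argocd app sync argocd --dry-run   # preview what a sync would change without applying it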
Welcome back to the series where we have been building an application with Qwik that incorporates AI tooling from OpenAI. So far we’ve created a pretty cool app that uses AI to generate text and images. Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying Now, there’s just one more thing to do. It’s launch time! I’ll be deploying to Akamai’s cloud computing services (formerly Linode), but these steps should work with any VPS provider. Let’s do this! Setup Runtime Adapter There are a couple of things we need to get out of the way first: deciding where we are going to run our app, what runtime it will run in, and how the deployment pipeline should look. As I mentioned before, I’ll be deploying to a VPS in Akamai’s connected cloud, but any other VPS should work. For the runtime, I’ll be using Node.js, and I’ll keep the deployment simple by using Git. Qwik is cool because it’s designed to run in multiple JavaScript runtimes. That’s handy, but it also means that our code isn’t ready to run in production as is. Qwik needs to be aware of its runtime environment, which we can do with adapters. We can see and install available adapters with the command npm run qwik add. This will prompt us with several options for adapters, integrations, and plugins. In my case, I’ll go down and select the Fastify adapter. It works well on a VPS running Node.js. You can select a different target if you prefer. Once you select your integration, the terminal will show you the changes it’s about to make and prompt you to confirm. You’ll see that it wants to modify some files, create some new ones, install dependencies, and add some new npm scripts. Make sure you’re comfortable with these changes before confirming. Once these changes are installed, your app will have what it needs to run in production. You can test this by building the production assets and running the serve command. (Note: For some reason, npm run build always hangs for me, so I run the client and server build scripts separately). npm run build.client && npm run build.server && npm run serve This will build out our production assets and start the production server listening for requests at http://localhost:3000. If all goes well, you should be able to open that URL in your browser and see your app there. It won’t actually work because it’s missing the OpenAI API keys, but we’ll sort that part out on the production server. Push Changes To Git Repo As mentioned above, this deployment process is going to be focused on simplicity, not automation. So rather than introducing more complex tooling like Docker containers or Kubernetes, we’ll stick to a simpler, but more manual process: using Git to deploy our code. I’ll assume you already have some familiarity with Git and a remote repo you can push to. If not, please go make one now. You’ll need to commit your changes and push them to your repo. git commit -am "ready to commit" && git push origin main Prepare Production Server If you already have a VPS ready, feel free to skip this section. I’ll be deploying to an Akamai VPS.
I won’t walk through the step-by-step process for setting up a server, but in case you’re interested, I chose the Nanode 1 GB shared CPU plan for $5/month with the following specs: Operating system: Ubuntu 22.04 LTS Location: Seattle, WA CPU: 1 RAM: 1 GB Storage: 25 GB Transfer: 1 TB Choosing different specs shouldn’t make a difference when it comes to running your app, although some of the commands to install any dependencies may be different. If you’ve never done this before, then try to match what I have above. You can even use a different provider, as long as you’re deploying to a server to which you have SSH access. Once you have your server provisioned and running, you should have a public IP address that looks something like 172.100.100.200. You can log into the server from your terminal with the following command: ssh root@172.100.100.200 You’ll have to provide the root password if you have not already set up an authorized key. We’ll use Git as a convenient tool to get our code from our repo into our server, so that will need to be installed. But before we do that, I always recommend updating the existing software. We can do the update and installation with the following command. sudo apt update && sudo apt install git -y Our server also needs Node.js to run our app. We could install the binary directly, but I prefer to use a tool called NVM, which allows us to easily manage Node versions. We can install it with this command: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash Once NVM is installed, you can install the latest version of Node with: nvm install node Note that the terminal may say that NVM is not installed. If you exit the server and sign back in, it should work. Upload, Build, and Run App With our server set up, it’s time to get our code installed. With Git, it’s relatively easy. We can copy our code into our server using the clone command. You’ll want to use your own repo, but it should look something like this: git clone https://github.com/AustinGil/versus.git Our source code is now on the server, but it’s still not quite ready to run. We still need to install the NPM dependencies, build the production assets, and provide any environment variables. Let’s do it! First, navigate to the folder where you just cloned the project. I used: cd versus The install is easy enough: npm install The build command is: npm run build However, if you have any type-checking or linting errors, it will hang there. You can either fix the errors (which you probably should) or bypass them and build anyway with this: npm run build.client & npm run build.server The latest version of the project source code has working types if you want to check that. The last step is a bit tricky. As we saw above, environment variables will not be injected from the .env file when running the production app. Instead, we can provide them at runtime right before the serve command like this: OPENAI_API_KEY=your_api_key npm run serve You’ll want to provide your own API key there in order for the OpenAI requests to work. Also, for Node.js deployments, there’s an extra, necessary step. You must also set an ORIGIN variable assigned to the full URL where the app will be running. Qwik needs this information to properly configure their CSRF protection. 
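Putting the two variables together, a single serve invocation on the server might look like the following sketch, where https://example.com is a placeholder for wherever your app will actually live: Shell
# Provide both runtime variables inline; neither is read from .env in production.
OPENAI_API_KEY=your_api_key ORIGIN=https://example.com npm run serve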
If you don’t know the URL, you can disable this feature in the /src/entry.preview.tsx file by setting the createQwikCity options property checkOrigin to false: export default createQwikCity({ render, qwikCityPlan, checkOrigin: false }); This process is outlined in more detail in the docs, but it’s recommended not to disable, as CSRF can be quite dangerous. And anyway, you’ll need a URL to deploy the app anyway, so better to just set the ORIGIN environment variable. Note that if you make this change, you’ll want to redeploy and rerun the build and serve commands. If everything is configured correctly and running, you should start seeing the logs from Fastify in the terminal, confirming that the app is up and running. {"level":30,"time":1703810454465,"pid":23834,"hostname":"localhost","msg":"Server listening at http://[::1]:3000"} Unfortunately, accessing the app via IP address and port number doesn’t show the app (at least not for me). This is likely a networking issue, but also something that will be solved in the next section, where we run our app at the root domain. The Missing Steps Technically, the app is deployed, built, and running, but in my opinion, there is a lot to be desired before we can call it “production-ready.” Some tutorials would assume you know how to do the rest, but I don’t want to do you like that. We’re going to cover: Running the app in background mode Restarting the app if the server crashes Accessing the app at the root domain Setting up an SSL certificate One thing you will need to do for yourself is buy the domain name. There are lots of good places. I’ve been a fan of Porkbun and Namesilo. I don’t think there’s a huge difference for which registrar you use, but I like these because they offer WHOIS privacy and email forwarding at no extra charge on top of their already low prices. Before we do anything else on the server, it’ll be a good idea to point your domain name’s A record (@) to the server’s IP address. Doing this sooner can help with propagation times. Now, back in the server, there’s one glaring issue we need to deal with first. When we run the npm run serve command, our app will run as long as we keep the terminal open. Obviously, it would be nice to exit out of the server, close our terminal, and walk away from our computer to go eat pizza without the app crashing. So we’ll want to run that command in the background. There are plenty of ways to accomplish this: Docker, Kubernetes, Pulumis, etc., but I don’t like to add too much complexity. So for a basic app, I like to use PM2, a Node.js process manager with great features, including the ability to run our app in the background. From inside your server, run this command to install PM2 as a global NPM module: npm install -g pm2 Once it’s installed, we can tell PM2 what command to run with the “start” command: pm2 start "npm run serve" PM2 has a lot of really nice features in addition to running our apps in the background. One thing you’ll want to be aware of is the command to view logs from your app: pm2 logs In addition to running our app in the background, PM2 can also be configured to start or restart any process if the server crashes. This is super helpful to avoid downtime. You can set that up with this command: pm2 startup Ok, our app is now running and will continue to run after a server restart. Great! But we still can’t get to it. Lol! My preferred solution is using Caddy. This will resolve the networking issues, work as a great reverse proxy, and take care of the whole SSL process for us. 
We can follow the install instructions from their documentation and run these five commands: sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list sudo apt update sudo apt install caddy Once that’s done, you can go to your server’s IP address and you should see the default Caddy welcome page: Progress! In addition to showing us something is working, this page also gives us some handy information on how to work with Caddy. Ideally, you’ve already pointed your domain name to the server’s IP address. Next, we’ll want to modify the Caddyfile: sudo nano /etc/caddy/Caddyfile As their instructions suggest, we’ll want to replace the :80 line with our domain (or subdomain), but instead of uploading static files or changing the site root, I want to remove (or comment out) the root line and enable the reverse_proxy line, pointing the reverse proxy to my Node.js app running at port 3000. versus.austingil.com { reverse_proxy localhost:3000 } After saving the file and reloading Caddy (systemctl reload caddy), the new Caddyfile changes should take effect. Note that it may take a few moments before the app is fully up and running. This is because one of Caddy’s features is to provision a new SSL certificate for the domain. It also sets up the automatic redirect from HTTP to HTTPS. So now if you go to your domain (or subdomain), you should be redirected to the HTTPS version running a reverse proxy in front of your generative AI application which is resilient to server crashes. How awesome is that!? Using PM2 we can also enable some load-balancing in case you’re running a server with multiple cores. The full PM2 command including environment variables and load-balancing might look something like this: OPENAI_API_KEY=your_api_key ORIGIN=example.com pm2 start "npm run serve" -i max Note that you may need to remove the current instance from PM2 and rerun the start command, you don’t have to restart the Caddy process unless you change the Caddy file, and any changes to the Node.js source code will require a rebuild before running it again. Hell Yeah! We Did It! Alright, that’s it for this blog post and this series. I sincerely hope you enjoyed both and learned some cool things. Today, we covered a lot of things you need to know to deploy an AI-powered application: Runtime adapters Building for production Environment variables Process managers Reverse-proxies SSL certificates If you missed any of the previous posts, be sure to go back and check them out. I’d love to know what you thought about the whole series. If you want, you can play with the app I built. Let me know if you deployed your own app. Also, if you have ideas for topics you’d like me to discuss in the future I’d love to hear them :) UPDATE: If you liked this project and are curious to see what it might look like as a SvelteKit app, check out this blog post by Tim Smith where he converts this existing app over. Thank you so much for reading.
In the fast-evolving landscape of the Internet of Things (IoT), edge computing has emerged as a critical component. By processing data closer to where it's generated, edge computing offers enhanced speed and reduced latency, making it indispensable for IoT applications. However, developing and deploying IoT solutions that leverage edge computing can be complex and challenging. Agile methodologies, known for their flexibility and efficiency, can play a pivotal role in streamlining this process. This article explores how Agile practices can be adapted for IoT projects utilizing edge computing in conjunction with cloud computing, focusing on optimizing the rapid development and deployment cycle. Agile in IoT Agile methodologies, with their iterative and incremental approach, are well-suited for the dynamic nature of IoT projects. They allow for continuous adaptation to changing requirements and rapid problem-solving, which is crucial in the IoT landscape where technologies and user needs evolve quickly. Key Agile Practices for IoT and Edge Computing In the realm of IoT and edge computing, the dynamic and often unpredictable nature of projects necessitates an approach that is both flexible and robust. Agile methodologies stand out as a beacon in this landscape, offering a framework that can adapt to rapid changes and technological advancements. By embracing key Agile practices, developers and project managers can navigate the complexities of IoT and edge computing with greater ease and precision. These practices, ranging from adaptive planning and evolutionary development to early delivery and continuous improvement, are tailored to meet the unique demands of IoT projects. They facilitate efficient handling of high volumes of data, security concerns, and the integration of new technologies at the edge of networks. In this context, the right tools and techniques become invaluable allies, empowering teams to deliver high-quality, innovative solutions in a timely and cost-effective manner. Scrum Framework with IoT-Specific Modifications Tools: JIRA, Asana, Microsoft Azure DevOps JIRA: Customizable Scrum boards to track IoT project sprints, with features to link user stories to specific IoT edge development tasks. Asana: Task management with timelines that align with sprint goals, particularly useful for tracking the progress of edge device development. Microsoft Azure DevOps: Integrated with Azure IoT tools, it supports backlog management and sprint planning, crucial for IoT projects interfacing with Azure IoT Edge. Kanban for Continuous Flow in Edge Computing Tools: Trello, Kanbanize, LeanKit Trello: Visual boards to manage workflow of IoT edge computing tasks, with power-ups for automation and integration with development tools. Kanbanize: Advanced analytics and flow metrics to monitor the progress of IoT tasks, particularly useful for continuous delivery in edge computing. LeanKit: Provides a holistic view of work items and allows for easy identification of bottlenecks in the development process of IoT systems. Continuous Integration/Continuous Deployment (CI/CD) for IoT Edge Applications Tools: Jenkins, GitLab CI/CD, CircleCI Jenkins With IoT Plugins: Automate building, testing, and deploying for IoT applications. Plugins can be used for specific IoT protocols and edge devices. GitLab CI/CD: Provides a comprehensive DevOps solution with built-in CI/CD, perfect for managing source code, testing, and deployment of IoT applications. 
CircleCI: Efficient for automating CI/CD pipelines in cloud environments, which can be integrated with edge computing services. Test-Driven Development (TDD) for Edge Device Software Tools: Selenium, Cucumber, JUnit Selenium: Automated testing for web interfaces of IoT applications. Useful for testing user interfaces on management dashboards of edge devices. Cucumber: Supports behavior-driven development (BDD), beneficial for defining test cases in plain language for IoT applications. JUnit: Essential for unit testing in Java-based IoT applications, ensuring that individual components work as expected. Agile Release Planning with Emphasis on Edge Constraints Tools: Aha!, ProductPlan, Roadmunk Aha!: Roadmapping tool that aligns release plans with strategic goals, especially useful for long-term IoT edge computing projects. ProductPlan: For visually mapping out release timelines and dependencies, critical for synchronizing edge computing components with cloud infrastructure. Roadmunk: Helps visualize and communicate the roadmap of IoT product development, including milestones for edge technology integration. Leveraging Tools and Technologies Development and Testing Tools Docker and Kubernetes: These tools are essential for containerization and orchestration, enabling consistent deployment across various environments, which is crucial for edge computing applications. Example - In the manufacturing sector, Docker and Kubernetes are pivotal in deploying and managing containerized applications across the factory floor. For instance, a car manufacturer can use these tools for deploying real-time analytics applications on the assembly line, ensuring consistent performance across various environments. GitLab CI/CD: Offers a single application for the entire DevOps lifecycle, streamlining the CI/CD pipeline for IoT projects. Example - Retailers use GitLab CI/CD to automate the testing and deployment of IoT applications in stores. This automation is crucial for applications like inventory tracking systems, where real-time data is essential for maintaining stock levels efficiently. JIRA and Trello: For Agile project management, providing transparency and efficient tracking of progress. Example - Smart city initiatives utilize JIRA and Trello to manage complex IoT projects like traffic management systems and public safety networks. These tools aid in tracking progress and coordinating tasks across multiple teams. Edge-Specific Technologies Azure IoT Edge: This service allows cloud intelligence to be deployed locally on IoT devices. It’s instrumental in running AI, analytics, and custom logic on edge devices. Example - Healthcare providers use Azure IoT Edge for deploying AI and analytics close to patient monitoring devices. This approach enables real-time health data analysis, crucial for critical care units where immediate data processing can save lives. AWS Greengrass: Seamlessly extends AWS to edge devices, allowing them to act locally on the data they generate while still using the cloud for management, analytics, and storage. Example - In agriculture, AWS Greengrass facilitates edge computing in remote locations. Farmers deploy IoT sensors for soil and crop monitoring. These sensors, using AWS Greengrass, can process data locally, making immediate decisions about irrigation and fertilization, even with limited internet connectivity. FogHorn Lightning™ Edge AI Platform: A powerful tool for edge intelligence, it enables complex processing and AI capabilities on IoT devices. 
Example - The energy sector, particularly renewable energy, uses FogHorn’s Lightning™ Edge AI Platform for real-time analytics on wind turbines and solar panels. The platform processes data directly on the devices, optimizing energy output based on immediate environmental conditions. Challenges and Solutions Managing Security: Edge computing introduces new security challenges. Agile teams must incorporate security practices into every phase of the development cycle. Tools like Fortify and SonarQube can be integrated into the CI/CD pipeline for continuous security testing. Ensuring Scalability: IoT applications must be scalable. Leveraging microservices architecture can address this. Tools like Docker Swarm and Kubernetes aid in managing microservices efficiently. Data Management and Analytics: Efficient data management is critical. Apache Kafka and RabbitMQ are excellent for data streaming and message queuing. For analytics, Elasticsearch and Kibana provide real-time insights. Conclusion The application and adoption of Agile methodologies in edge computing for IoT projects represent both a technological shift and a strategic imperative across various industries. This fusion is not just beneficial but increasingly necessary, as it facilitates rapid development, deployment, and the realization of robust, scalable, and secure IoT solutions. Spanning sectors from manufacturing to healthcare, retail, and smart cities, the convergence of Agile practices with edge computing is paving the way for more responsive, efficient, and intelligent solutions. This integration, augmented by cutting-edge tools and technologies, is enabling organizations to maintain a competitive edge in the IoT landscape. As the IoT sector continues to expand, the amalgamation of Agile methodologies, edge computing, and IoT is set to drive innovation and efficiency to new heights, redefining the boundaries of digital transformation and shaping the future of technological advancement.
In the dynamic realm of Android app development, efficiency is key. Enter Azure DevOps, Microsoft's integrated solution that transforms the development lifecycle. This tutorial will show you how to leverage Azure DevOps for seamless Android app development. What Is Azure DevOps? Azure DevOps is not just a version control system; it's a comprehensive set of development and deployment tools that seamlessly integrate with popular platforms and technologies. From version control (Azure Repos) to continuous integration and delivery (Azure Pipelines), and even application monitoring (Azure Application Insights), Azure DevOps offers a unified environment to manage your entire development cycle. This unified approach significantly enhances collaboration, accelerates time-to-market, and ensures a more reliable and scalable deployment of your Android applications. Azure DevOps is a game-changer in the development of feature-rich Android mobile applications, offering a unified platform for version control, continuous integration, and automated testing. With Azure Pipelines, you can seamlessly orchestrate the entire build and release process, ensuring that changes from each team member integrate smoothly. The integrated nature of Azure DevOps promotes collaboration, accelerates the development cycle, and provides robust tools for monitoring and troubleshooting. This unified approach not only helps meet tight deadlines but also ensures a reliable and scalable deployment of the Android application, enhancing the overall efficiency and success of the project. Use the azure-pipelines.yml file at the root of the repository. Get this file to build the Android application using a CI (Continuous Integration) build. Follow the instructions in the previously linked article, "Introduction to Azure DevOps," to create a build pipeline for an Android application. After creating a new build pipeline, you will be prompted to choose a repository. Select the GitHub/Azure Repository. You then need to authorize the Azure DevOps service to connect to the GitHub account. Click Authorize, and this will integrate with your build pipeline. After the connection to GitHub has been authorized, select the right repo, which is used to build the application. How To Build an Android Application With Azure Step 1: Get a Fresh Virtual Machine Azure Pipelines have the option to build and deploy using a Microsoft-hosted agent. When running a build or release pipeline, get a fresh virtual machine (VM). If Microsoft-hosted agents will not work, use a self-hosted agent, as it will act as a build host. pool: name: Hosted VS2017 demands: java Step 2: Build a Mobile Application Build a mobile application using a Gradle wrapper script. Check out the branch and repository of the gradlew wrapper script. The gradlew wrapper script is used for the build. If the agent is running on Windows, it must use the gradlew.bat; if the agent runs on Linux or macOS, it can use the gradlew shell script. Step 3: Set Directories Set the current working directory and Gradle WrapperFile script directory. 
YAML
steps:
- task: Gradle@2
  displayName: 'gradlew assembleDebug'
  inputs:
    gradleWrapperFile: 'MobileApp/SourceCode -Android/gradlew'
    workingDirectory: 'MobileApp/SourceCode -Android'
    tasks: assembleDebug
    publishJUnitResults: false
    checkStyleRunAnalysis: true
    findBugsRunAnalysis: true
    pmdRunAnalysis: true
Next, add the WhiteSource Bolt task. It detects all open-source components in your build and scans them for security vulnerabilities and outdated libraries (including dependencies pulled in from the source code). You can view the results at the build, project, and account level. YAML
- task: whitesource.ws-bolt.bolt.wss.WhiteSource Bolt@18
  displayName: 'WhiteSource Bolt'
  inputs:
    cwd: 'MobileApp/SourceCode -Android'
Step 4: Copy Files Copy the .apk file from the source to the artifact directory. YAML
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactStagingDirectory)'
  inputs:
    SourceFolder: 'MobileApp/SourceCode -Android'
    Contents: '**/*.apk'
    TargetFolder: '$(build.artifactStagingDirectory)'
Use the publish task in the build pipeline to publish the build artifacts to Azure Pipelines or a file share; they will be stored on the Azure DevOps server. YAML
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
The new pipeline wizard should recognize that you already have an azure-pipelines.yml in the root of the repository. The azure-pipelines.yml file contains all the settings that the build service should use to build and test the application, as well as to generate the output artifacts that will later be used by the release pipeline (CD) to deploy the app. Step 5: Save and Queue the Build After everything is in place, save and queue the build so you can see the logs for each task in the corresponding job. Step 6: Extract the Artifact Zip Folder After the build finishes, extract the artifact zip folder, copy the .apk file onto the mobile device, and install it. Conclusion Azure DevOps is a game-changer for Android app development, streamlining processes and boosting collaboration. Encompassing version control, continuous integration, and automated testing, this unified solution accelerates development cycles and ensures the reliability and scalability of Android applications. The tutorial has guided you through the process of building and deploying an Android mobile application using Azure DevOps. By following these steps, you've gained the skills to efficiently deploy Android applications, meet tight deadlines, and ensure reliability. Whether you're optimizing your workflow or entering Android development, integrating Azure DevOps will significantly enhance your efficiency and project success.
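For reference, the snippets from this walkthrough can be assembled into a single azure-pipelines.yml. Treat this as a sketch: the paths and the WhiteSource Bolt task line mirror the article, but task versions and folder names should be checked against your own project. Shell
# Write the consolidated pipeline definition at the root of the repository.
cat > azure-pipelines.yml <<'EOF'
pool:
  name: Hosted VS2017
  demands: java

steps:
- task: Gradle@2
  displayName: 'gradlew assembleDebug'
  inputs:
    gradleWrapperFile: 'MobileApp/SourceCode -Android/gradlew'
    workingDirectory: 'MobileApp/SourceCode -Android'
    tasks: assembleDebug
    publishJUnitResults: false
    checkStyleRunAnalysis: true
    findBugsRunAnalysis: true
    pmdRunAnalysis: true
- task: whitesource.ws-bolt.bolt.wss.WhiteSource Bolt@18
  displayName: 'WhiteSource Bolt'
  inputs:
    cwd: 'MobileApp/SourceCode -Android'
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactStagingDirectory)'
  inputs:
    SourceFolder: 'MobileApp/SourceCode -Android'
    Contents: '**/*.apk'
    TargetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
EOF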
This article will demonstrate how to build a complete CI/CD pipeline in Visual Studio and deploy it to Azure using the new Continuous Delivery Extension for Visual Studio. Using CI allows you to merge the code changes in order to ensure that those changes work with the existing code base and allows you to perform testing. On the other hand, using CD, you are repeatedly pushing code through a deployment pipeline where it is built, tested, and deployed afterward. This CI/CD team practice automates the build, testing, and deployment of your application, and allows complete traceability in order to see code changes, reviews, and test results. What Is Visual Studio? Visual Studio is a powerful Integrated Development Environment (IDE). This feature-rich IDE has a robust environment for coding, debugging, and building applications. Azure DevOps (previously VS Team Services) has a comprehensive collection of collaboration tools and extensions that closely integrates the CI/CD pipeline of the Visual Studio environment. The CI (Continuous Integration) updates any code changes to the existing code base while CD (Continuous Deployment) pushes it through the deployment pipeline to build, test, and deploy further. The Visual Studio with CI/CD extensions thus automates the build, deployment, and testing process of software development. Not only that, it allows complete traceability in order to see code changes, reviews, and test results. The quality of software is largely dependent on the process applied to develop it. The automated system of The CI/CD practices is focused on this goal through continuous delivery and deployment. Consequently, this not only ensures software quality but also enhances the security and profitability of the production. This also shortens the production time to include new features, creating happy customers with low stress on development. In order to create a CI build, a release pipeline, and Release Management that is going to deploy the code into Azure, all you need is an existing web-based application and an extension from the marketplace. DZone’s previously covered how to build a CI/CD pipeline from scratch. How To Build a CI/CD Pipeline With Visual Studio Step1: Enable the Continuous Delivery Extension for Visual Studio In order to use the Continuous Delivery Tools for Visual Studio extension, you just need to enable it. The Continuous Delivery Tools for Visual Studio extension makes it simple to automate and stay up to date on your DevOps pipeline for other projects targeting Azure. The tools also allow you to improve your code quality and security. Go to Tools, and choose Extensions and Updates. From the prompted window, select Continuous Delivery Tools for Visual Studio and click Enable. *If you don't have Continuous Delivery Tools installed, go to Online Visual Studio Marketplace, search for "Continuous" and download it. Step 2: Create a Project in Team Services In this step, you are going to create a project in Team Services and put your project code there without leaving your IDE. Team Services is a tool that allows you to build Continuous Integration and Continuous Delivery. Go into the Solution Explorer, and right-click on your web-based project. Click on the new context menu Configure Continuous Delivery. A new window is displayed Configure Continuous Delivery. Click on the Add this project to source control plus button. Click on the Publish Git Repo button located in the Publish to Visual Studio Team Services section in Team Explorer. 
Your Microsoft Account is automatically fetched from your IDE. Also is displayed the Team Services Domain which will be used and your Repository Name. Click on the Publish Repository button in order to create a project in Team Services. After the synchronization is finished you will see that your project is created in the Team Explorer. Now your project is created into the Team Services account (the source code is uploaded, there is a Git Repository and it is generating a continuous delivery pipeline automatically). 7. In the output window, you can see that your CI/CD is set up for your project. 8. After a while, you are going to get 3 different links: Link to the build Link to the release Link to the assets created in Azure which is going to be the target for your deployment (application service) Step 3: Open the Project in Team Services A build definition is the entity through which you define your automated build process. In the build definition, you compose a set of tasks, each of which performs a step in your build. Choose the Build Definition link provided in the Output window and copy. Paste it into a browser in order to open the project containing your application in Team Services. The summary for the build definition is displayed. You can see that the build is already running. Click on the build link. It is shown as an output of your build server which is running your build automatically. Click on the Edit build definition. Add an additional task. Customize the tasks that are already there. Step 4: Test Assemblies Task Each task has a Version selector that enables you to specify the major version of the task used in your build or deployment. When a new minor version is released (for example, 1.2 to 1.3), your build or release will automatically use the new version. However, if a new major version is released (for example, 2.0), your build or release will continue to use the major version you specified until you edit the definition and manually change to the new major version. Click on the Test Assemblies. You can see a little flag icon which means that a new preview version of this task is available. Click on the Flag Icon and choose version 2* in order to preview. There are several new items shown for the Test Assemblies. One of them is Run only impacted tests. This is an item that allows tools to analyze which lines of code were changed against the tests that were run in the past and you will know which tests execute which lines of code (you will not have to run all of your tests: you are able to run only the tests that were impacted by the changes). Run tests in parallel on multi-core machines is an item that allows your tests to run in such a way as to use all the cores you have available. Using this item you will effectively increase the number of tests running at the same time, which will reduce the time to run all the tests. Step 5: Add an Additional Task A task is the building block for defining automation in a build definition, or in an environment of a release definition. A task is simply a packaged script or procedure that has been abstracted with a set of inputs. There are some built-in tasks in order to enable fundamental build and deployment scenarios. Click on the Add Task plus button in order to create a new additional task. An enormous list of tasks is displayed that can be run out of the box allowing you to target any language/platform (Chef support, CocoaPods, Docker, Node.js, Java). 
If you want to install another feature or extension that is not listed, simply click on the link Check out our Marketplace which is displayed above the list of tasks. Step 6: Setting Encrypted and Non-Encrypted Variables Variables are a great way to store and share key bits of data in your build definition. Some build templates automatically define some variables for you. Go and click on the second tab named Variables (next to the tab Tasks). Click on the padlock located next to the variable value in order to encrypt it. After encrypting, the value of the variable is displayed with asterisks, and no one can see this value except the person who encrypted it. Step 7: Turn on the Continuous Integration (CI) Trigger On the Triggers tab, you specify the events that will trigger the build. You can use the same build definition for both CI and scheduled builds. Go and click on the third tab named Triggers, where you can set up your Continuous Integration. Enable the Continuous Integration trigger (leaving the Disable this trigger box unchecked) so that this build runs automatically whenever someone checks in code or, in other words, when a new version of the source artifacts is available. Step 8: Build Definition Options If the build process fails, you can automatically create a work item to track getting the problem fixed. You can specify the work item type. You can also select if you want to assign the work item to the requestor. For example, if this is a CI build and a team member checks in some code that breaks the build, then the work item is assigned to that person. Go and click on the fourth tab named Options. Enable the box Create Work Item on Failure. CI builds are supposed to build at every check-in, and if some of them fail because a developer made an error, you can automatically create a work item in order to track getting the problem fixed. The Default agent queue option is displayed in the second half of the Options. In the drop-down list are all available pools: Default (if your team uses private agents set up on your own) Hosted (Windows-based machine, if your team uses VS2017 or VS2015) Hosted Linux Preview (if your team uses development tools on Ubuntu) Hosted VS2017 (if your team uses Visual Studio 2017) Step 9: Build Summary You can see the summary of the build - in other words, everything that happened during the build - including: Code coverage All work items and tasks Deployments Step 10: Release Definition A release definition is one of the fundamental concepts in Release Management for VSTS and TFS. It defines the end-to-end release process for an application to be deployed across various environments. Remember that you, as a developer, never have to leave Visual Studio in order to deploy the application into Azure. A release definition that deploys the code into Azure is displayed. Click on the three dots located next to the particular release definition. From the displayed context menu, select Edit. Here you can see the series of environments and the tasks that you want to perform in each environment. Step 11: Check if the Application Is Really Deployed From Visual Studio Into Azure Microsoft Azure is a cloud computing service for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. In this step, you will verify that your web application is deployed in Azure by following these steps: Go to your Azure portal. Click on Resource Groups. Search for "demo." In the search results, click on your web project "e2edemo." Open the web application link.
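Once the portal shows the resource, you can also check from a terminal that the site responds. The hostname below is an assumption based on the "e2edemo" project name used in this walkthrough; substitute the URL shown in your own Azure portal. Shell
# A 200 response (or a redirect to the app's start page) confirms the web app is live.
curl -I https://e2edemo.azurewebsites.net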
Further Reading: Release pipeline using Azure DevOps. Conclusion Continuous Integration is a software development practice in which you build and test software every time a developer pushes code to the repository. Continuous Delivery is a software engineering approach in which Continuous Integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention. High-performing teams usually practice both Continuous Integration (CI) and Continuous Delivery (CD). VSTS not only automates the build, testing, and deployment of your application, but also gives you complete traceability to see everything in the build, including changes to your code, reviews, and test results, making it a tool that fully supports DevOps practices.
In this brief demonstration, we’ll set up and run three instances of WildFly on the same machine (localhost). Together they will form a cluster. It’s a rather classic setup, where the appservers need to synchronize the content of their application’s session to ensure failover if one of the instances fails. This configuration guarantees that, if one instance fails while processing a request, another one can pick up the work without any data loss. Note that we’ll use a multicast to discover the members of the cluster and ensure that the cluster’s formation is fully automated and dynamic. Install Ansible and Its Collection for WildFly On a Linux system using a package manager, installing Ansible is pretty straightforward: Shell sudo dnf install ansible-core Please refer to the documentation available online for installation on other operating systems. Note that this demonstration assumes you are running both the Ansible controller and the target (same machine in our case) on a Linux system. However, it should work on any other operating system with a few adjustments. Before going further, double-check that you are running a recent enough version of Ansible (2.14 or above will do, but 2.9 is the bare minimum): Shell ansible [core 2.15.3] config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.11/site-packages/ansible ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.11.2 (main, Jun 6 2023, 07:39:01) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.11) jinja version = 3.1.2 libyaml = True The next and last step to ready your Ansible environment is to install the Ansible collection for WildFly on the controller (the machine that will run Ansible): Shell # ansible-galaxy collection install middleware_automation.wildfly Starting galaxy collection install process Process install dependency map Starting collection install process Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/middleware_automation-wildfly-1.4.3.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/middleware_automation-wildfly-1.4.3-9propr_x Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/ansible-posix-1.5.4.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/ansible-posix-1.5.4-pq0cq2mn Installing 'middleware_automation.wildfly:1.4.3' to '/root/.ansible/collections/ansible_collections/middleware_automation/wildfly' middleware_automation.wildfly:1.4.3 was installed successfully Installing 'ansible.posix:1.5.4' to '/root/.ansible/collections/ansible_collections/ansible/posix' Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/middleware_automation-common-1.1.4.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/middleware_automation-common-1.1.4-nks7pvy7 ansible.posix:1.5.4 was installed successfully Installing 'middleware_automation.common:1.1.4' to '/root/.ansible/collections/ansible_collections/middleware_automation/common' middleware_automation.common:1.1.4 was installed successfully Set up the WildFly Cluster For simplicity’s sake and to allow you to reproduce this demonstration on a single machine (physical or virtual) or even a container, we opted to deploy our three instances on one target. 
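The playbook run later in this article is invoked with -i inventory, but the inventory file itself is never shown. Since everything targets the local machine, a minimal inventory is enough; the exact contents below are an assumption that fits the playbook's default of localhost. Shell
# Create a one-host inventory pointing Ansible at the local machine.
cat > inventory <<'EOF'
localhost ansible_connection=local
EOF

# Optional sanity check: confirm the WildFly collection is visible to Ansible.
ansible-galaxy collection list | grep -i wildfly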
We chose localhost as a target, so that the demonstration can even be performed without a remote host. There are essentially two steps to set up the WildFly cluster: Install WildFly on the targeted hosts (here, just localhost). This means downloading the archive from this website and decompressing the archive in the appropriate directory (JBOSS_HOME). These tasks are handled by the wildfly_install role supplied by the Ansible collection for WildFly. Create the configuration files to run several instances of WildFly. Because we’re running multiple instances on a single host, you also need to ensure that each instance has its own subdirectories and set of ports, so that the instances can coexist and communicate. Fortunately, this functionality is provided by a role within the Ansible collection called wildfly_systemd. Ansible Playbook To Install WildFly Here is the playbook we’ll use to deploy our cluster. Its content is relatively self-explanatory, at least if you are somewhat familiar with the Ansible syntax. YAML
- name: "WildFly installation and configuration"
  hosts: "{{ hosts_group_name | default('localhost') }}"
  become: yes
  vars:
    wildfly_install_workdir: '/opt/'
    wildfly_config_base: standalone-ha.xml
    wildfly_version: 30.0.1.Final
    wildfly_java_package_name: java-11-openjdk-headless.x86_64
    wildfly_home: "/opt/wildfly-{{ wildfly_version }}"
    instance_http_ports:
      - 8080
      - 8180
      - 8280
    app:
      name: 'info-1.2.war'
      url: 'https://drive.google.com/uc?export=download&id=13K7RCqccgH4zAU1RfOjYMehNaHB0A3Iq'
  collections:
    - middleware_automation.wildfly
  roles:
    - role: wildfly_install
  tasks:
    - name: "Set up for WildFly instance {{ item }}."
      ansible.builtin.include_role:
        name: wildfly_systemd
      vars:
        wildfly_config_base: 'standalone-ha.xml'
        wildfly_instance_id: "{{ item }}"
        instance_name: "wildfly-{{ wildfly_instance_id }}"
        wildfly_config_name: "{{ instance_name }}.xml"
        wildfly_basedir_prefix: "/opt/{{ instance_name }}"
        service_systemd_env_file: "/etc/wildfly-{{ item }}.conf"
        service_systemd_conf_file: "/usr/lib/systemd/system/wildfly-{{ item }}.service"
      loop: "{{ range(0,3) | list }}"
    - name: "Wait for each instance HTTP ports to become available."
      ansible.builtin.wait_for:
        port: "{{ item }}"
      loop: "{{ instance_http_ports }}"
    - name: "Checks that WildFly server is running and accessible."
      ansible.builtin.get_url:
        url: "http://localhost:{{ port }}/"
        dest: "/opt/{{ port }}"
      loop: "{{ instance_http_ports }}"
      loop_control:
        loop_var: port
In short, this playbook first uses the Ansible collection for WildFly to install the appserver by using the wildfly_install role. This will download all the artifacts, create the required system groups and users, install dependencies (such as unzip), and so on. At the end of its execution, all the tidbits required to run WildFly on the target host are installed, but the server is not yet running. That’s what happens in the next step. In the tasks section of the playbook, we then call on another role provided by the collection: wildfly_systemd. This role will take care of integrating WildFly as a regular system service into the service manager. Here, we use a loop to ensure that we create not one, but three different services. Each one will have the same configuration (standalone-ha.xml) but run on different ports, using a different set of directories to store its data. Run the Playbook!
Now, let’s run our Ansible playbook and observe its output: Shell $ ansible-playbook -i inventory playbook.yml PLAY [WildFly installation and configuration] ********************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure prerequirements are fullfilled.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/prereqs.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Validate credentials] **** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate existing zipfiles wildfly-30.0.1.Final.zip for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate patch version for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate existing additional zipfiles {{ eap_archive_filename } for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check that required packages list has been provided.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Prepare packages list] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Add JDK package java-11-openjdk-headless.x86_64 to packages list] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install required packages (5)] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure required local user exists.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/user.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure workdir /opt/ exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure archive_dir /opt/ exists.] 
*** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure server is installed] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/install.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check local download archive path] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set download paths] ****** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check target archive: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Retrieve archive from website: https://github.com/wildfly/wildfly/releases/download] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/install/web.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Download zipfile from https://github.com/wildfly/wildfly/releases/download/30.0.1.Final/wildfly-30.0.1.Final.zip into /work/wildfly-30.0.1.Final.zip] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Retrieve archive from RHN] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install server using RPM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check downloaded archive] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Copy archive to target nodes] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check target archive: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Verify target archive state: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Read target directory information: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Extract files from /opt//wildfly-30.0.1.Final.zip into /opt/.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Note: decompression was not executed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Read information on server home directory: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check state of server home directory: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Deploy configuration] **** changed: [localhost] TASK [Apply latest cumulative patch] ******************************************* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure required parameters for elytron adapter are provided.] 
*** skipping: [localhost] TASK [Install elytron adapter] ************************************************* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install server using Prospero] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check wildfly install directory state] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate conditions] ***** ok: [localhost] TASK [Ensure firewalld configuration allows server port (if enabled).] ********* skipping: [localhost] TASK [Set up for WildFly instance {{ item }.] ********************************* TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] 
*** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-00 for instance: wildfly-0] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-0] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-0.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/wildfly-0.service] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-0 state to started] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** 
skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-11 for instance: wildfly-1] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-1] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-1.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/wildfly-1.service] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-1 
state to started] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-22 for instance: wildfly-2] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-2] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-2.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd 
configuration for service: /usr/lib/systemd/system/wildfly-2.service] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-2 state to started] *** changed: [localhost] TASK [Wait for each instance HTTP ports to become available.] ****************** ok: [localhost] => (item=8080) ok: [localhost] => (item=8180) ok: [localhost] => (item=8280) TASK [Checks that WildFly server is running and accessible.] ******************* changed: [localhost] => (item=8080) changed: [localhost] => (item=8180) changed: [localhost] => (item=8280) PLAY RECAP ********************************************************************* localhost : ok=105 changed=26 unreachable=0 failed=0 skipped=46 rescued=0 ignored=0

Note that the playbook itself is not that long, yet it does a lot for us. It runs more than 100 tasks (ok=105 in the recap above), starting by automatically installing the dependencies, including the JVM required by WildFly, and downloading the WildFly binaries. The wildfly_systemd role does even more, setting up three distinct services, each with its own set of ports and its own directory layout for instance-specific data. Even better, the WildFly installation is not duplicated: all of the binaries live under the /opt/wildfly-30.0.1.Final directory, while each instance keeps its data files in its own folder (/opt/wildfly-00, /opt/wildfly-11, and /opt/wildfly-22, as seen in the log above). This means we only need to update the binaries once and then restart the instances to deploy a patch or upgrade to a new version of WildFly. On top of everything, the instances use the standalone-ha.xml configuration as their baseline, so they are already set up for clustering.
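If you want the playbook itself to double-check this layout, a minimal post_tasks sketch along the following lines could do it. The directory paths simply mirror the log output above; the task names and the use of ansible.builtin.stat and ansible.builtin.assert are illustrative additions of mine, not something provided by the collection:

YAML
  post_tasks:
    - name: "Read the state of the shared install directory and the per-instance basedirs"
      ansible.builtin.stat:
        path: "{{ item }}"
      register: wildfly_layout
      loop:
        - /opt/wildfly-30.0.1.Final   # shared binaries, installed once
        - /opt/wildfly-00             # basedir of instance wildfly-0
        - /opt/wildfly-11             # basedir of instance wildfly-1
        - /opt/wildfly-22             # basedir of instance wildfly-2
    - name: "Fail early if any of these directories is missing"
      ansible.builtin.assert:
        that:
          - item.stat.exists
          - item.stat.isdir | default(false)
        quiet: true
      loop: "{{ wildfly_layout.results }}"
      loop_control:
        label: "{{ item.item }}"

Running this against the same hosts after the playbook completes should report all four paths as existing directories, which confirms the shared-binaries/separate-basedirs split described above.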
Check That Everything Works as Expected

The easiest way to confirm that the playbook did indeed install WildFly and start three instances of the appserver is to use the systemctl command to check the state of the associated services:

Shell
# systemctl status wildfly-0
● wildfly-0.service - JBoss EAP (standalone mode)
   Loaded: loaded (/usr/lib/systemd/system/wildfly-0.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2024-01-18 07:01:44 UTC; 5min ago
 Main PID: 884 (standalone.sh)
    Tasks: 89 (limit: 1638)
   Memory: 456.3M
   CGroup: /system.slice/wildfly-0.service
           ├─ 884 /bin/sh /opt/wildfly-30.0.1.Final/bin/standalone.sh -c wildfly-0.xml -b 0.0.0.0 -bmanagement 127.0.0.1 -Djboss.bind.address.private=127.0.0.1 -Djboss.default.multicast.address=230.0.0.4 -Djboss.server.config.dir=/opt/wildfly-30.0.1.Final/standalone/configuration/ -Djboss.server.base.dir=/opt/wildfly-00 -Djboss.tx.node.id=wildfly-0 -Djboss.socket.binding.port-offset=0 -Djboss.node.name=wildfly-0 -Dwildfly.statistics-enabled=false
           └─1044 /etc/alternatives/jre_11/bin/java -D[Standalone] -Djdk.serialFilter=maxbytes=10485760;maxdepth=128;maxarray=100000;maxrefs=300000 -Xmx1024M -Xms512M --add-exports=java.desktop/sun.awt=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.url.ldap=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.url.ldaps=ALL-UNNAMED --add-exports=jdk.naming.dns/com.sun.jndi.dns=ALL-UNNAMED --add-opens=java.base/com.sun.net.ssl.internal.ssl=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.security=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.management/javax.management=ALL-UNNAMED --add-opens=java.naming/javax.naming=ALL-UNNAMED -Dorg.jboss.boot.log.file=/opt/wildfly-00/log/server.log -Dlogging.configuration=file:/opt/wildfly-30.0.1.Final/standalone/configuration/logging.properties -jar /opt/wildfly-30.0.1.Final/jboss-modules.jar -mp /opt/wildfly-30.0.1.Final/modules org.jboss.as.standalone -Djboss.home.dir=/opt/wildfly-30.0.1.Final -Djboss.server.base.dir=/opt/wildfly-00 -c wildfly-0.xml -b 0.0.0.0 -bmanagement 127.0.0.1 -Djboss.bind.address.private=127.0.0.1 -Djboss.default.multicast.address=230.0.0.4 -Djboss.server.config.dir=/opt/wildfly-30.0.1.Final/standalone/configuration/ -Djboss.server.base.dir=/opt/wildfly-00 -Djboss.tx.node.id=wildfly-0 -Djboss.socket.binding.port-offset=0 -Djboss.node.name=wildfly-0 -Dwildfly.statistics-enabled=false
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,090 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 84) MODCLUSTER000032: Listening to proxy advertisements on /224.0.1.105:23364
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,148 INFO [org.wildfly.extension.undertow] (MSC service thread 1-4) WFLYUT0006: Undertow HTTPS listener https listening on [0:0:0:0:0:0:0:0]:8443
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,149 INFO [org.jboss.as.ejb3] (MSC service thread 1-3) WFLYEJB0493: Jakarta Enterprise Beans subsystem suspension complete
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,183 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS]
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,246 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-2) WFLYDS0013: Started FileSystemDeploymentService for directory /opt/wildfly-00/deployments
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,285 INFO [org.jboss.ws.common.management] (MSC service thread 1-5) JBWS022052: Starting JBossWS 7.0.0.Final (Apache CXF 4.0.0)
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,383 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,388 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,388 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,390 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 30.0.1.Final (WildFly Core 22.0.2.Final) started in 2699ms - Started 311 of 708 services (497 services are lazy, passive or on-demand) - Server configuration file in use: wildfly-0.xml

Deploy an Application to the WildFly Cluster

Now our three WildFly instances are running, but the cluster has yet to form: with no application deployed, there is no reason for it to exist yet. Let’s modify our Ansible playbook to deploy a simple application to all instances; this will allow us to check that the cluster is working as expected. To achieve this, we’ll leverage another role provided by the WildFly collection, wildfly_utils. In our case, we will use its jboss_cli.yml task file, which encapsulates the execution of JBoss command-line interface (CLI) queries:

YAML
…
  post_tasks:
    - name: "Ensure webapp {{ app.name }} has been retrieved from {{ app.url }}."
      ansible.builtin.get_url:
        url: "{{ app.url }}"
        dest: "{{ wildfly_install_workdir }}/{{ app.name }}"
    - name: "Deploy webapp"
      ansible.builtin.include_role:
        name: wildfly_utils
        tasks_from: jboss_cli.yml
      vars:
        jboss_home: "{{ wildfly_home }}"
        query: "'deploy --force {{ wildfly_install_workdir }}/{{ app.name }}'"
        jboss_cli_controller_port: "{{ item }}"
      loop:
        - 9990
        - 10090
        - 10190

Now, we will once again execute our playbook so that the web application is deployed on all instances. Once the automation has completed successfully, the deployment will trigger the formation of the cluster.

Verify That the WildFly Cluster Is Running and the App Is Deployed

You can verify the cluster formation by looking at the log files of any of the three instances:

Shell
…
2022-12-23 15:02:08,252 INFO [org.infinispan.CLUSTER] (thread-7,ejb,jboss-eap-0) ISPN000094: Received new cluster view for channel ejb: [jboss-eap-0] (3) [jboss-eap-0, jboss-eap-1, jboss-eap-2]
…

Using the Ansible Collection as an Installer for WildFly

One last remark: while the collection is designed to be used inside a playbook, you can also use the playbook it provides to install WildFly directly:

Shell
$ ansible-playbook -i inventory middleware_automation.wildfly.playbook

Conclusion

Here you go: with a short and simple playbook, we have fully automated the deployment of a WildFly cluster! This playbook can now be used against one, two, or three remote machines, or even hundreds of them! I hope this post has been informative and that it has convinced you to use Ansible to set up your own WildFly servers!