Automating the Code Integration, Testing, and Deployment Process for a Microservices Architecture
General Introduction
Presentation of FPT Software and the FHN.DCS Team
FPT Software, formerly known as The Corporation for Financing and Promoting Technology, was established on September 13, 1988, and is a multinational provider of technology and IT services headquartered in Hanoi, Vietnam, with a workforce of 32,000 employees. Ranked first among private companies in the VNR500 list of 2008, FPT is recognized as one of Vietnam's leading firms based on revenue, growth, and other economic factors. The company has diversified subsidiaries in telecommunications, real estate, education, and financial services, and continues to thrive despite global economic challenges.
FHN.DCS (FPT Hanoi, Digital Consulting Solutions) is a diverse consulting team that fosters cultural exchange and knowledge sharing. During my internship, I was assigned to the FSOFT team, which focuses on scientific research across various themes, benefiting from the collective expertise of the different specialized teams within FHN.DCS.
Study Context and Problem Statement
In a competitive environment, the ability to deliver reliably, quickly, and continuously is essential, which FSOFT prioritizes in its applications. Implementing a continuous integration (CI) and continuous deployment (CD) pipeline ensures effective integration, testing, and deployment of applications. The company aims to automate various phases of deployment and maintenance of its information system applications.
The evolution of software development processes and the adoption of cloud-native systems are increasingly prevalent, as organizations strive to implement automation-based methodologies to enhance product quality. The emergence of various new methodologies and cultures aims to improve and expedite software development, ultimately reducing the time required to bring new solutions to market.
The pace at which companies deliver software to their users has a significant impact. In this context, several questions define the problems this internship seeks to solve:
- How can a microservices-based application operate in a cloud-native environment?
- How can changes made to an application be applied safely and consistently?
- How can changes made to the application be deployed to a production environment without causing downtime, while allowing testing to be conducted during the deployment, all in an automated manner?
In summary, the primary objective of this internship is to discover ways to enhance productivity and reduce time-to-market for application changes, all while ensuring a high level of rigor and quality in the developed software.
The objectives of this internship are:
- Set up a continuous delivery pipeline that supports configuration management
- Implement the practice of "Pipeline as Code"
- Implement the practice of "Infrastructure as Code"
These objectives will be applied to the project with the goal of:
- Introducing as few bugs as possible (and even fewer regressions)
- Ensuring easy long-term maintenance
- Pooling development efforts
Definitions and Workings of the Essential Terms for Better Understanding
Cloud Computing
Cloud computing refers to the delivery of computing services such as servers, storage, databases, networking, and software over the internet, commonly known as "the cloud." This approach facilitates faster innovation, flexible resources, and economies of scale. Typically, users only pay for the services they utilize, allowing for reduced operational costs, more efficient infrastructure management, and adaptability to changing business needs.
Figure 1: Cloud Computing (Ref: https://www.aipsolutions.tech/images/cloudservices21.png)
The main advantages of cloud computing are the following:
- Cost: The adoption of cloud computing enables businesses to optimize their IT costs by eliminating the capital expenditures associated with purchasing hardware and software, as well as the setup and maintenance of on-site data centers.
- Speed: Cloud computing services are primarily offered on a self-service and on-demand basis, allowing businesses to provision large amounts of computing resources in just a few minutes, typically with just a few clicks. This capability provides companies with significant flexibility and alleviates the burden of capacity planning.
- Global scale: Cloud computing services offer the ability to scale elastically, letting you access your resources wherever you are.
- Productivity: Cloud computing enhances productivity by eliminating many tasks, such as hardware installation and software patching. This allows IT teams to focus on more critical business objectives.
- Performance: Leading cloud computing services rely on a global network of secure data centers, which are consistently upgraded with the latest generation of fast and efficient hardware.
- Reliability: Cloud computing enhances data reliability by enabling efficient data backup, disaster recovery, and business continuity. It is a cost-effective solution, as data can be duplicated across multiple redundant sites within the cloud provider's network.
- Security: Cloud service providers offer a variety of policies, technologies, and controls that enhance your overall security posture. These measures help safeguard your data, applications, and infrastructure from potential threats.
Different types of cloud computing:
- Private cloud: A private cloud is a collection of cloud computing resources used exclusively by a single organization or business. It can be physically located within the company's own data center, ensuring enhanced security and control over the infrastructure.
- Public cloud: A public cloud is owned and operated by a third-party cloud service provider that offers computing resources, including servers and storage, over the Internet.
- Hybrid cloud: A hybrid cloud combines public and private clouds, linked by technology that allows them to share data and applications.
2.1.3 Different services offered by cloud computing
Most cloud computing services can be classified into three broad categories: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service) (see Figure 2).
Figure 2: Different cloud computing service models
(Ref: https://www.ringcentral.com/fr/fr/blog/wp-content/uploads/2022/12/differences-entre-saas-paas-et-iaas-
Infrastructure as a Service (IaaS) is a cloud computing service model that provides essential computing, storage, and networking resources on demand, allowing users to pay only for what they use.
Migrating your organization's infrastructure to an IaaS solution reduces the maintenance of local data centers, saves on hardware costs, and provides real-time business insights.
IaaS lets you avoid the expense and hassle of buying and managing physical servers and data center infrastructure. [2]
Infrastructure as Code (IaC) tools enable the management of infrastructure through configuration files instead of graphical user interfaces. IaC allows for the secure, consistent, and repeatable construction, modification, and management of infrastructure by defining resource configurations that can be modified, reused, and shared.
Manual infrastructure management is time-consuming and prone to errors, especially when dealing with large-scale applications. Infrastructure as Code (IaC) allows you to define the desired state of your infrastructure without detailing every step needed to achieve that state. It automates infrastructure management, enabling developers to focus on creating and enhancing applications rather than managing environments. Companies leverage IaC to control costs, mitigate risks, and swiftly respond to new business opportunities.
Advantages of Infrastructure as Code:
- Easily duplicate an environment: The same environment can be deployed on another system in a different location using the same Infrastructure as Code (IaC), provided that the necessary infrastructure resources are available.
- Reduce configuration errors: Infrastructure as Code (IaC) minimizes configuration errors and streamlines error checking. When issues arise from updates to the IaC code, you can swiftly resolve them by reverting to the most recent stable configuration files.
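The declarative idea behind IaC, that you describe the desired end state and let the tool compute the steps, can be illustrated with a minimal sketch. The following Python toy "reconciler" is not any real IaC tool (the resource names are hypothetical); it only shows the compare-then-apply pattern that tools in this category implement.

```python
# Toy reconciler illustrating the declarative IaC pattern:
# describe the desired state, and let the tool compute the steps.

def plan(current: set, desired: set) -> dict:
    """Compare the current infrastructure with the desired state."""
    return {
        "create": sorted(desired - current),   # resources that are missing
        "delete": sorted(current - desired),   # resources no longer wanted
    }

def apply_plan(current: set, steps: dict) -> set:
    """Apply the computed steps, yielding the new infrastructure state."""
    return (current - set(steps["delete"])) | set(steps["create"])

# Example: one server exists; the configuration now asks for two servers
# and a database, and drops an old load balancer.
current = {"server-a", "old-lb"}
desired = {"server-a", "server-b", "db-main"}

steps = plan(current, desired)
print(steps)   # {'create': ['db-main', 'server-b'], 'delete': ['old-lb']}
print(apply_plan(current, steps) == desired)   # True
```

Real IaC tools add persistence, dependency ordering, and provider APIs on top of this core idea, but the "diff the desired state against reality" step is the same.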
Figure 3: Infrastructure as Code (Ref: https://www.clickittech.com/devops/infrastructure-as-code-tools/)
Microservices
Microservice architecture, also known simply as "microservices," is a development approach that breaks software down into modules (see Figure 4) with specialized functions and detailed interfaces. [4]
In a microservices architecture, each microservice is designed to perform a specific task and communicates with clients or other microservices through lightweight communication mechanisms, such as REST API requests.
Monolithic architecture refers to applications designed as large, autonomous units that are tightly interconnected, making modifications challenging. A minor change in one function may require recompiling and testing the entire system, which contradicts the agile approach favored by developers today. Additionally, scaling monolithic applications is difficult, as updating a specific function necessitates changes to the entire application.
Figure 4: Comparison between monolithic and microservices architectures
(Ref: https://docs.oracle.com/en/solutions/learn-architect-microservice/img/monolithic_vs_microservice.png)
The following table summarizes the differences between microservices and monolithic architectures.
Design
- Microservices: The application consists of loosely coupled services; each service supports a single business task.
- Monolithic: The entire application is designed, developed, and deployed as a single unit.
Reusability
- Microservices: Microservices define APIs that expose their functionality to any client; clients can even be other applications.
- Monolithic: Opportunities for reusing functionality across applications are limited.
Deployment
- Microservices: Each microservice is deployed independently, without affecting the other microservices in the application.
- Monolithic: Even a small change requires the entire application to be rebuilt and redeployed.
Availability
- Microservices: The application's functionality is spread across several services; if one microservice fails, the functionality offered by the other microservices remains available.
- Monolithic: The failure of one component can affect the availability of the entire application.
Scalability
- Microservices: Each microservice can be scaled independently of the other services.
- Monolithic: The application must be scaled as a whole, even when only one function needs more capacity.
Data storage
- Microservices: Decentralized: each microservice can use its own database.
- Monolithic: Centralized: the entire application uses one or more shared databases.
Technology
- Microservices: Each microservice can be developed with the programming language and framework best suited to the problem it is meant to solve.
- Monolithic: In general, the entire application is written in a single programming language.
Communication within the application
- Microservices: To communicate with one another, the microservices of an application use the request-response model; the typical implementation uses REST API calls over HTTP.
- Monolithic: Internal procedure (function) calls handle communication between the application's components; there is no need to limit the number of internal procedure calls.
Table 1: Comparison between microservices and monolithic architectures
2.2.1 Different ways microservices communicate
Microservices are gaining popularity due to their ability to enhance scalability, maintainability, and overall development agility. A key factor in the effective functioning of microservices is their communication mechanism. Let us explore the four communication modes of microservices.
1) Communication with clients via RESTful APIs:
Microservices communicate with clients through RESTful APIs, providing a standardized and platform-independent approach. Clients can send HTTP requests to specific endpoints exposed by the microservices to retrieve necessary data or perform actions.
2) Communication between microservices:
In complex business processes, microservices frequently need to communicate with one another. This inter-service communication can be achieved using various protocols, including HTTP, gRPC, or message queues like RabbitMQ.
By communicating directly, microservices can collaborate effectively to deliver a cohesive application experience.
3) Communication with data stores:
Microservices typically have their own databases or data stores. To maintain data consistency, they communicate with their respective data stores when reading or writing data. This approach ensures that each service can manage its data independently, without interfering with other services.
4) Communication with infrastructure components:
Microservices rely on essential infrastructure components such as service discovery, load balancers, and API gateways. Service discovery enables the dynamic location of, and connection to, other microservices, while load balancers distribute incoming requests across multiple service instances to ensure high availability and scalability. API gateways act as intermediaries between clients and microservices, managing crucial tasks like authentication, rate limiting, and caching.
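The load-balancing role described above can be sketched with a toy round-robin balancer. This is an illustrative in-process model, not a real load balancer; the instance names ("orders-1" etc.) are made up for the example.

```python
# Toy round-robin load balancer: incoming requests are spread evenly
# across several instances of the same microservice.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, instances):
        self._pool = cycle(instances)   # endless rotation over the instances

    def route(self, request):
        instance = next(self._pool)     # pick the next instance in turn
        return f"{instance} handled {request}"

lb = RoundRobinBalancer(["orders-1", "orders-2", "orders-3"])
for i in range(4):
    print(lb.route(f"request-{i}"))
# the fourth request wraps around to orders-1 again
```

Production load balancers add health checks and weighting, but the core dispatch loop follows this shape.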
2.2.2 Types of microservice communication
When developing a web application using a microservices architecture, understanding the various types of communication between microservices is crucial for successful implementation. This section explores two essential types of communication between microservices.
In synchronous communication, a client sends a request to a microservice and waits for a response before proceeding, resembling a traditional client-server model. RESTful APIs are commonly used for this type of communication: the client issues an HTTP request to the microservice, which processes the request and sends back a response. While this method simplifies development, it can lead to performance bottlenecks and potential cascading failures if a microservice experiences downtime.
Asynchronous communication allows clients to send messages to a microservice without expecting an immediate response. Instead, the microservice processes the message and may respond later or not at all, depending on the situation. This method is often facilitated by message brokers like RabbitMQ or Kafka, enhancing system resilience since services can operate independently even if some are temporarily unavailable. However, it also introduces complexities in managing eventual consistency and message processing.
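The contrast between the two styles can be sketched with in-process stand-ins: a direct function call plays the synchronous request-response role, and a `queue.Queue` stands in for a message broker such as RabbitMQ. The service and event names are illustrative, not from any real system.

```python
# Sketch contrasting synchronous and asynchronous microservice communication.
import queue

# --- Synchronous: the caller blocks until it receives the response ---
def billing_service(order_id: str) -> str:
    return f"invoice for {order_id}"

response = billing_service("order-42")   # the caller waits for this result
print(response)                          # invoice for order-42

# --- Asynchronous: the caller drops a message on the broker and moves on ---
broker = queue.Queue()
broker.put({"event": "order_placed", "order_id": "order-42"})
# ...the caller continues immediately; a consumer picks the message up later...
message = broker.get()
print(f"processing {message['event']} for {message['order_id']}")
```

In a real deployment the broker lives in a separate process and the consumer runs independently, which is exactly what lets services survive each other's downtime.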
2.2.3 How do microservices communicate?
A microservices architecture relies on effective communication among its various services to enable seamless collaboration and improved scalability. The three primary communication methods used by microservices are:
1) HTTP-based communication: Microservices frequently communicate using the HTTP protocol, enabling interaction through standard HTTP methods such as GET, POST, PUT, and DELETE. This "RESTful API" style of communication ensures service decoupling and platform independence. Additionally, HTTP-based communication is well suited to request-response scenarios and synchronous interactions.
2) Message-based communication: Message-based communication uses message brokers or queues to facilitate asynchronous interactions between microservices. This approach ensures loose coupling, fault tolerance, and scalability. Services can send messages to queues, allowing other services to consume them at their own pace, thereby enhancing resilience in distributed systems.
3) Event-driven communication: In event-driven communication, microservices emit events when specific actions occur, allowing other services to subscribe and respond accordingly. This model enables loose coupling between services and facilitates real-time reactions to changes within the system. Event-driven communication is essential for implementing event sourcing and event-driven architectures.
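The publish/subscribe relationship at the heart of the event-driven style can be shown with a minimal in-process event bus. This is a toy sketch (the event name and reactions are invented), not a real broker, but it captures how subscribers react to an event without the publisher knowing about them.

```python
# Minimal publish/subscribe event bus sketching event-driven communication.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # every subscriber reacts; the publisher knows none of them
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []
# Two independent services react to the same event, unaware of each other.
bus.subscribe("order_placed", lambda p: log.append(f"stock reserved for {p}"))
bus.subscribe("order_placed", lambda p: log.append(f"email sent for {p}"))
bus.publish("order_placed", "order-42")
print(log)
```

Adding a third reaction requires only another `subscribe` call; the publisher is untouched, which is the loose coupling the text describes.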
Microservices enable applications to independently adjust specific services based on demand. This modular approach allows developers to add or modify features without impacting the entire application, making it highly adaptable to evolving needs.
DevOps
Today, businesses are adopting a dynamic, customer-centric approach to the development and delivery of their applications. As customers increasingly prefer digital transactions in the mobile era, the role of application developers becomes essential to enhancing the customer experience. Simultaneously, the trend towards agility has inspired DevOps, emphasizing the importance of professional interactions over processes and tools.
In recent years, development and operations teams have significantly improved their collaboration, but the need for realignment between these two groups has become more pressing. This demand has given rise to the DevOps movement, which embodies a philosophy that fundamentally transforms how IT professionals view system stability and functionality, as well as their roles in the end-to-end value stream. Additionally, Cloud Computing and Software-Defined Networking (SDN) have played crucial roles in breaking down the silos that once separated development and operations teams.
DevOps is a set of practices that emphasizes collaboration and communication between software developers and IT operations professionals by automating the software delivery process and infrastructure changes. The term "DevOps" is a blend of "development" and "operations," aimed at enhancing communication between these two teams.
DevOps aims to establish a culture and environment that enables the rapid, frequent, and efficient design, testing, and deployment of software. It transcends being merely a methodology; it embodies a true work philosophy.
The main benefits of DevOps are:
1. Improved quality of code, products, and services (fewer defects, a higher change success rate, etc.)
2. Increased efficiency (for example, more time spent on value-adding activities, delivering unprecedented added value to the customer)
3. Faster time to market
4. Better alignment between IT and the business
5. Smaller releases delivered very quickly and very frequently
6. Improved productivity, customer satisfaction, and staff satisfaction
7. Fewer risks and fewer rollbacks
8. Lower long-term costs
(Ref : https://images.clickittech.com/2020/wp-content/uploads/2023/11/17193621/DevOps-Architecture.jpg )
Continuous Integration and Continuous Delivery (CI/CD)
In today's rapidly changing landscape, software companies face the significant challenge of swiftly responding to market and customer demands. The CI/CD methodology has emerged as a crucial solution to address this challenge effectively.
CI/CD stands for Continuous Integration and Continuous Delivery, a combined approach to software development. These practices are currently the most widely adopted methods for reducing the development and delivery cycle time of software.
(Réf : https://miro.medium.com/v2/resize:fit:1100/format:webp/0*7DGzTFK-YV66JcFA.png )
Continuous Integration (CI) is a DevOps best practice that automates the integration of code changes from multiple contributors into a single software project. This approach enables developers to frequently merge code modifications into a central repository, where automated builds and tests are executed. Using automated tools, CI verifies the correctness of new code before it is integrated, enhancing the overall development process.
A source code version control system is essential to the continuous integration process. This version control system is further enhanced by additional measures, including automated code quality testing and syntax style review tools.
In a Continuous Integration (CI) practice, developers build, run, and test code on their own workstations before integrating it into the version control repository. Once changes are pushed to the repository, a sequence of events is triggered, starting with the compilation of the latest version of the source code. If the compilation succeeds, unit tests are executed; upon their success, the version is deployed to testing environments for system tests, typically using automated testing methods. The team is kept informed of the process's progress, and a report is generated detailing aspects such as the version number, defects, and the number of tests conducted.
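The gated sequence just described (compile, then unit tests, then deployment to a test environment, each stage depending on the previous one) can be modeled with a small sketch. This toy pipeline runner is illustrative only; real CI servers express the same idea in pipeline configuration files.

```python
# Toy model of the CI sequence: each stage runs only if the previous passed,
# and a report of stage outcomes is produced for the team.
def run_pipeline(stages):
    report = []
    for name, stage in stages:
        ok = stage()
        report.append((name, "passed" if ok else "failed"))
        if not ok:
            break   # a failing stage stops the pipeline immediately
    return report

stages = [
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("deploy to test env", lambda: True),
]
print(run_pipeline(stages))

# A failing stage stops the pipeline before later stages run:
broken = [("compile", lambda: True),
          ("unit tests", lambda: False),
          ("deploy to test env", lambda: True)]
print(run_pipeline(broken))   # deployment never happens
```

The report returned at the end corresponds to the build report the text mentions, listing each stage and its outcome.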
Continuous delivery is the ability to safely and rapidly implement various changes, including new features, configuration adjustments, bug fixes, and experiments, into production or directly to users in a sustainable manner.
Our goal is to make deployments—whether for large-scale distributed systems, complex production environments, embedded systems, or applications—predictable, routine, and achievable on demand.
We achieve this by ensuring that our code is always in a deployable state, even with thousands of developers making daily changes. This approach completely eliminates the traditional integration, testing, and hardening phases that followed the "dev complete" stage, as well as code freezes.
The blue-green deployment strategy
The blue/green deployment model is an application release strategy that facilitates the gradual transition of user traffic from an older version of an application or microservice to a nearly identical new version, with both versions running concurrently in the production environment.
The old version represents the blue environment, while the new version represents the green environment. Once the traffic transfer from the blue to the green environment is complete, the old version can either be kept available for a potential rollback or removed from production and reused as the basis for the next update.
(Réf : https://semaphoreci.com/wp-content/uploads/2020/08/bg3a-1.webp)
The blue/green deployment process works as follows:
- Deploy: deploy the new version (green) alongside the current version (blue). Test it thoroughly to verify its functionality, making any necessary adjustments.
- Switch traffic: once the new version is ready, transition all traffic from the blue version to the green version (see Figure 8). This process should be executed seamlessly so that end users experience no interruption.
- Monitor: closely monitor how users interact with the new version and watch for errors and problems.
- Roll back or promote: in the event of an issue during deployment, promptly revert traffic to the blue version. If no problems arise, keep traffic directed to the green version, which then becomes the new blue version, allowing the next release to be deployed alongside it as the new green version.
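The traffic switch at the core of this strategy can be sketched as a router holding a pointer to the live environment: promotion flips the pointer to green, and rollback flips it back to blue. This is a toy in-process model (the version strings are invented), not a real router or load balancer.

```python
# Sketch of the blue-green switch: a router points at the live environment;
# switching flips the pointer, and flipping again rolls back.
class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"                      # all traffic goes here

    def deploy_green(self, version):
        self.environments["green"] = version    # new version runs alongside

    def switch(self):
        self.live = "green" if self.live == "blue" else "blue"

    def handle(self, request):
        return f"{self.environments[self.live]} served {request}"

router = BlueGreenRouter()
router.deploy_green("v1.1")
print(router.handle("req-1"))   # still served by blue (v1.0)
router.switch()                 # cut traffic over to green
print(router.handle("req-2"))   # now served by green (v1.1)
router.switch()                 # rollback: traffic returns to blue
print(router.handle("req-3"))   # served by v1.0 again
```

Because both environments stay running, the switch and the rollback are both a single pointer change, which is what makes the strategy zero-downtime.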
In a blue-green deployment model, the production environment changes with each release (see Figure 9):
(Réf : https://octopus.com/docs/deployments/patterns/blue-green-deployments/images/blue-green-versions.png )
Monitoring
Monitoring refers to the practice of understanding how software components operate in remote environments. Observability, in a software context, is the ability to understand a software component solely by examining its external outputs.
What is application monitoring?
Application monitoring is a data-driven process that evaluates the performance of an application to ensure it meets user expectations. Developers track potential bugs, monitor performance and availability, assess efficiency, and analyze how end users interact with the application.
By monitoring application logs, it becomes possible to promptly address any issues that may arise, continuously enhance the app's speed and responsiveness, and improve the user experience based on feedback from real users.
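The kind of signal a monitoring tool collects (per-call latency and outcome) can be sketched with a simple instrumentation decorator. This is a toy example with an invented `checkout` function; real monitoring agents export such records to a metrics backend instead of a Python list.

```python
# Toy monitoring sketch: a decorator records each call's latency and outcome,
# the kind of signal an application monitoring tool aggregates.
import time
from functools import wraps

metrics = []   # stand-in for a metrics backend

def monitored(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = func(*args, **kwargs)
            status = "ok"
            return result
        finally:
            metrics.append({
                "call": func.__name__,
                "status": status,
                "latency_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@monitored
def checkout(order_id):
    return f"confirmed {order_id}"

checkout("order-42")
print(metrics[0]["call"], metrics[0]["status"])   # checkout ok
```

Aggregating these records over time is what lets a team spot latency regressions and error spikes after a deployment.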
The importance of application monitoring:
- Ensures quality control
- Provides insight into the application's performance and efficiency
- Helps adhere to post-deployment budget strategies
- Improves visibility into how users actually use the application in real-world situations
- Promotes information sharing within the team
- Reduces risks related to bugs, downtime, and other problems
In general, monitoring is crucial for maintaining the health and performance of systems and applications. It enables proactive management, ensures high availability, and supports continuous improvement by providing valuable insights into system behavior.
DevSecOps
DevSecOps is the practice of integrating security testing at every stage of the software development process. It involves tools and processes that promote collaboration among developers, security specialists, and operations teams to create software that is both efficient and secure. DevSecOps fosters a cultural transformation, making security a shared responsibility among everyone involved in software development.
DevSecOps stands for development, security, and operations, representing an evolution of the DevOps practice. Each component delineates distinct roles and responsibilities for software teams involved in building software applications.
Stack
A stack is a set of AWS resources that you can manage as a single unit.
In other words, you can create, update, or delete a set of resources by managing stacks. All resources within a stack are defined by the AWS CloudFormation template associated with that stack. For instance, a stack may include all the resources needed to run a web application, such as a web server, a database, and networking rules. If the web application is no longer needed, simply deleting the stack removes all associated resources.
Because AWS CloudFormation treats the stack's resources as a single unit, they must all be created or deleted successfully for the stack to be created or deleted. If a resource cannot be created, AWS CloudFormation rolls the stack back and automatically deletes any resources that were created. If a resource cannot be deleted, the remaining resources are retained until the stack can be successfully deleted.
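The all-or-nothing behavior of a stack can be sketched as follows. This toy function is not the CloudFormation API; the resource names are invented, and it only illustrates the "create in order, roll back everything on failure" semantics described above.

```python
# Toy sketch of atomic stack creation: create resources in order, and if
# one fails, roll back by deleting everything created so far.
def create_stack(resources, fails=frozenset()):
    created = []
    for name in resources:
        if name in fails:                       # creation failed: roll back
            for done in reversed(created):
                print(f"rollback: deleting {done}")
            return None                         # the stack does not exist
        created.append(name)
        print(f"created {name}")
    return created                              # the whole stack came up

# Happy path: the entire stack comes up as one unit.
print(create_stack(["network", "database", "web-server"]))
# Failure path: the database fails, so the network is rolled back too.
print(create_stack(["network", "database", "web-server"], fails={"database"}))
```

Either the caller ends up with the complete set of resources or with none of them, which is exactly the unit-of-work guarantee a stack provides.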
You can manage stacks through the AWS CloudFormation console, the API, the AWS CLI, or Application Manager.
State of the Art
Approach to the comparative study of DevOps tools
This section analyzes the tools currently available on the market that address source code management, infrastructure creation, continuous integration and delivery (CI/CD), and application monitoring. With numerous tools available, the selection process depends on various criteria. To organize this study, which serves as a preliminary phase before implementing the pipeline, we will follow these steps:
- List the best-known tools on the market and select the most relevant ones for analysis and study
- Establish relevant and rigorous comparison criteria; financial criteria will not be taken into account, despite their importance
- Compare the tools against the established criteria to produce comparison tables, then summarize them and identify the most suitable tool
In what follows, we present, for each category of tools, the full comparative study that reflects exactly the approach just described.
3.1.1 Source code management tools
Source Code Management (SCM) systems are essential tools that enable teams to collaborate effectively on their project's source code repository. These tools track changes made to the codebase over time, facilitating seamless modifications and teamwork.
Teams using source code management systems can work simultaneously on the same project. Individual modifications and updates are added to the source code repository as commits, which encapsulate all changes. These grouped modifications can be reviewed and updated individually, or the entire group can be reverted if necessary. Access to this comprehensive commit history aids in identifying bugs and restoring previously removed features. This section explores the strengths and weaknesses of the three leading cloud-based Git platforms: GitHub, GitLab, and Bitbucket.
When selecting a source code management platform for your project, it is essential to consider various factors, including collaboration features, available automation, per-user pricing, the cost of executing CI/CD workflows, integrations, and security and code quality controls. The list of considerations continues to grow.
GitHub is a source code management platform that enables developers to manage, track, and store their code efficiently. Ideal for large teams, it offers various features such as branch creation and merging, facilitating seamless collaboration among team members.
- Comes with collaboration tools
- Integrates easily with version control and bug tracking tools
- Supports cloud-based deployment
- Makes it easy to track and review changes to the codebase
GitHub offers both free and paid plans to cater to various needs. The free tier includes the essential features for source code management, while the paid plans come with advanced functionality such as GitHub Copilot, an AI-powered tool designed to enhance code writing through intelligent suggestions.
Similar to GitHub, GitLab is a Git-based repository hosting platform launched in 2011, aiming to differentiate itself by offering a comprehensive product for the entire DevOps lifecycle. GitLab integrates essential tools such as an issue tracker, continuous integration, and continuous delivery, providing a unified interface for DevOps processes. Today, over 100,000 organizations, including IBM, Sony, NASA, and Alibaba, use GitLab for their development needs. Key features of GitLab include:
- GitLab Pages: lets you create dynamic websites with GitLab. This feature allows you to easily create and manage websites without having to learn to code.
- GitLab CI: lets you automate the testing of your code using a variety of testing tools.
- GitLab Flow: a branching workflow that helps developers streamline the process of building, testing, and deploying their code.
- GitLab Teams: a collaboration feature designed to help you organize your team into projects and groups, assign roles and permissions, and monitor all activity related to these projects and teams in real time.
- GitLab Enterprise Edition: offers additional capabilities such as advanced security measures, scalability, performance, and reliability.
BitBucket est un autre service d'hébergement de code source en ligne BitBucket a été lancé en
In 2008, Bitbucket initially operated solely with Mercurial, a free distributed version control system However, following its acquisition by Atlassian in October 2011, it began supporting Git as well This acquisition brought significant advantages, as Atlassian is known for its popular software tools like Jira, Trello, and Confluence, allowing Bitbucket to seamlessly integrate with these platforms, enhancing its functionality and appeal.
Among BitBucket's most popular features [17] are:
- Version control: BitBucket offers a robust solution for tracking code modifications. By using version control, developers avoid the errors that commonly occur when working with unversioned code, ensuring a more efficient development process.
- Source control: developers can track the changes made to a project's source files, always know the project's status, and collaborate more easily with other team members.
- Code review: developers can request feedback from peers before submitting their code for publication, ensuring the code is correct and meets the standards set by the team.
- Task management: developers can organize their tasks, monitor progress, and meet deadlines more easily, enhancing overall productivity.
The table below summarizes the comparison of the source-code managers presented above, according to the criteria established and described earlier.
| Criterion | BitBucket | GitLab | GitHub |
| --- | --- | --- | --- |
| Owner | Atlassian | GitLab Inc. | Microsoft |
| Written in | Python | Ruby, Go, Vue.js | Ruby |
| Built-in CI | ✔ | ✔ | ✗ (third-party applications can be used to provide CI features) |

Table 3: Summary of the comparison of competitors in the source-code management domain
Using this comparative table, we can select our source-code management tool based on the important criteria we established. It is evident that GitHub's features align well with our requirements.
Infrastructure provisioning tools
Infrastructure provisioning is the process of setting up and configuring the hardware, software, and network resources needed to support an application or system. In the context of AWS, this means creating and configuring the various resources required to run your applications or services. When building infrastructure for a cloud-native application, there are several ways to accomplish the task; one of them is to access the cloud provider's platform manually and create each component individually.
Another method is the Infrastructure as Code (IaC) approach, in which configuration templates are treated like software source code, allowing infrastructure to be created in a systematic and repeatable manner. The IaC model reduces errors in the creation and configuration of application infrastructure and saves developers time; in practice, it consists of writing files containing code that describes the desired infrastructure.
AWS CloudFormation is a powerful Infrastructure as Code (IaC) service offered by Amazon Web Services (AWS) that lets users declaratively define and provision AWS infrastructure and resources. With CloudFormation, you can create and manage various AWS resources, including Amazon EC2 instances, Amazon RDS databases, and Amazon S3 buckets, using templates written in JSON or YAML. CloudFormation offers several advantages that make it an essential tool for managing cloud resources efficiently.
Here are the five main advantages of AWS CloudFormation:
- Infrastructure as Code (IaC): defining resources through code improves version control and reproducibility, increases efficiency, simplifies management, and allows resources to be customized at the code level.
- Automation: resource provisioning, updates, and deletions are automated, reducing manual intervention and potential errors.
- Resource dependency management: dependencies between resources are handled automatically, ensuring resources are created and updated in the correct order.
- Scalability: easily create and manage scalable architectures, including Auto Scaling groups and load balancers.
- Cost management: gain visibility into resource provisioning costs, track spending, and control expenses for individual stacks and resources.
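As a minimal illustration of a declarative CloudFormation template, the YAML sketch below provisions a single S3 bucket (the parameter, bucket name, and description are hypothetical examples, not resources from the report):

```yaml
# Minimal CloudFormation template sketch: one S3 bucket declared in YAML.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example stack provisioning a single S3 bucket.

Parameters:
  EnvName:
    Type: String
    Default: dev

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # !Sub interpolates the parameter into the bucket name
      BucketName: !Sub "my-artifacts-${EnvName}"

Outputs:
  BucketName:
    Value: !Ref ArtifactBucket
```

Deploying the same template twice yields the same stack, which is the reproducibility benefit listed above.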
Terraform's strength lies in its ability to deliver infrastructure as code, which makes it a cornerstone of cloud migration projects. Automation, repeatability, and scalability are crucial for managing complex infrastructure environments, and in the realm of DevOps and cloud operations Terraform stands out by streamlining the deployment and management of infrastructure.
Here are the five main advantages of Terraform:
- Infrastructure as Code (IaC): Terraform lets you define and manage infrastructure using code, which promotes automation, version control, and repeatability.
- Multi-cloud management: Terraform is vendor-agnostic and supports multiple cloud providers, minimizing vendor lock-in and letting you manage resources across various environments.
- Dependency management: Terraform automatically manages resource dependencies, guaranteeing that resources are created or modified in the correct order.
- Plan preview: the terraform plan command lets users preview changes before they are applied, improving visibility and control over infrastructure modifications.
- Modularity: Terraform supports reusable modules, enabling configurations to be standardized and shared across projects and teams, which improves collaboration and efficiency.
The table below summarizes the comparison of the infrastructure-provisioning tools presented above, according to the criteria established and described earlier.
| Criterion | Terraform | AWS CloudFormation |
| --- | --- | --- |
| Configuration language | HCL (HashiCorp Configuration Language) | JSON / YAML |
| Product type | On-premise & SaaS | SaaS |
| Provider and ecosystem | Multi-cloud support; versatile | AWS only |
| Price & cost | Generally free; you pay for the provisioned resources | Generally free; you pay for the provisioned resources |
| Resource dependencies | Managed automatically | May require explicit dependency management |
| State management | Separate state file; more control | Managed by AWS; limited access |
Table 4: Summary of the comparison of competitors in the infrastructure-provisioning domain
Using this comparative table, we can select our infrastructure-provisioning tool based on the key criteria we established. It is evident that the features of AWS CloudFormation align well with our requirements.
Configuration management and automation tools
Configuration management tools are essential for managing changes and updates within a company's infrastructure. They streamline tedious manual tasks, allowing businesses to save time, minimize the risk of human error, and improve workplace productivity. Among the most popular configuration management tools used by companies are Ansible, Puppet, SaltStack, and Chef.
Ansible is a powerful open-source automation tool that operates across multiple platforms and is a favorite among DevOps professionals for resource provisioning. It is widely used to deliver software continuously through the concept of infrastructure as code. Ansible runs on Unix-like platforms and manages a variety of systems, including Unix and Microsoft architectures. It automates tasks agentlessly over SSH or Windows Remote Management connections, improving the efficiency, scalability, and reliability of IT infrastructure. Its key features and benefits include:
- Versatility: Ansible suits automation experts and everyday developers alike. It makes it easy to configure entire networks quickly and supports tasks such as software installation, automation of daily tasks, infrastructure provisioning, security and compliance management, and broader enterprise automation.
- Agentless operation: Ansible requires no agent installation on managed machines, reducing the risk of errors and security exposure. Master-agent interactions go through standard SSH or the Paramiko module, ensuring efficient resource use and lower maintenance costs.
- Python integration: users can manage nodes, respond to Python events, develop plugins, and import data from external sources through Ansible's Python API. Being built on Python, Ansible is simple for developers to install and use.
- SSH security: Ansible communicates over the secure SSH protocol, removing the need for stored passwords. It connects to clients via SSH, sends modules, executes them locally, and collects the results.
- Push-based architecture: configurations are written once and pushed to all nodes simultaneously, enabling rapid configuration changes across many servers with better efficiency and control.
- Structured automation: playbooks, roles, inventories, and variable files keep task execution organized and precise, streamlining system configuration, software installation, continuous delivery, and deployments with minimal service interruption.
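The playbook mechanism described above can be sketched as follows; this is a minimal illustrative example, assuming a hypothetical "backend" inventory group and the nginx package as the software being installed:

```yaml
# Minimal Ansible playbook sketch: install and start nginx on the hosts
# of the hypothetical "backend" inventory group.
- name: Configure backend servers
  hosts: backend
  become: true          # escalate privileges for package/service changes
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory.ini site.yml` pushes this configuration over SSH to every host in the group at once, which is the push-based model described above.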
SaltStack, also known as Salt, is a powerful configuration management and orchestration tool. It stands out as one of the top alternatives to Ansible, enabling system administrators to automate server provisioning and management tasks efficiently. Its main features are:
- Offers a simple programming interface
- Pre-built modules that support hundreds of applications
- A powerful API that interacts easily with other systems
- Designed to handle ten thousand minions per master
Puppet is an effective system-management tool that streamlines and automates configuration management [19]:
- Puppet works as a software deployment tool
- It offers open-source configuration management for configuring, managing, deploying, and orchestrating servers
- Puppet is designed for configuration management of Linux and Windows systems
- It is written in Ruby and uses its own domain-specific language (DSL) to define system configurations
- Declarative language: Puppet's declarative language makes it easy to specify the desired state of systems without spelling out the steps to get there
- Cross-platform compatibility: Puppet is platform-agnostic and can manage configurations across different operating systems, simplifying the management of multi-platform infrastructures
- Centralized management: configuration data is stored on a Puppet master server, allowing many nodes to be managed efficiently from a single location
- Resource management: Puppet keeps system resources such as files, packages, and services in their desired state, guaranteeing consistency and reliability across the system
- Extensibility: Puppet can be extended through custom modules and plugins, letting users adapt it to their specific configuration and automation needs
Chef is a DevOps tool for configuration management that enables automation, testing, and streamlined deployment of infrastructure through code. It uses a client-server architecture and runs on a variety of platforms, including Windows, Ubuntu, CentOS, and Solaris. Chef also integrates with popular cloud platforms such as AWS, Google Cloud Platform, and OpenStack. Its key features include:
- Effortless management of many servers with minimal staff: Chef manages large fleets of servers efficiently with a small team
- Compatibility with various operating systems: Chef works with many operating systems, including FreeBSD, Linux, and Windows
- Infrastructure blueprint maintenance: Chef maintains a detailed blueprint of the entire infrastructure, guaranteeing visibility and control
- Cloud integration: seamless integration with the major cloud service providers makes Chef versatile and adaptable for businesses optimizing their cloud operations
- Centralized management: Chef acts as a single server hub for deploying policies and ensuring consistency across the entire infrastructure
The table below summarizes the comparison of the configuration and automation tools presented above, according to the criteria established and described earlier.
| Criterion | Ansible | Puppet | Chef | SaltStack |
| --- | --- | --- | --- | --- |
| Agentless | ✔ | ✗ (master-agent) | ✗ (master-agent) | ✗ |
| Client dependencies | None (SSH) | Ruby | Ruby, sshd, bash | Python, sshd, bash |
| Mechanism | Push | Pull | Pull | Push |
| Ease of use | Easy | Not very easy | Difficult | Easy |
| Approach | Procedural | Declarative | Procedural | Procedural & declarative |
Table 5: Summary of the comparison of competitors in the configuration-management domain
Continuous integration tools
Continuous integration (CI) software lets developers commit code to a larger repository as often as they wish, and automates the handling of code changes in software projects. Several such tools exist on the market; we chose to compare the best-known, Jenkins, along with CircleCI and GitLab CI.
CircleCI is a powerful CI/CD tool designed for rapid software development and deployment. It automates the user's entire pipeline, streamlining everything from code building to testing and deployment.
With a dedicated API and hundreds of orbs (reusable packages of CircleCI configuration), this CI/CD platform [20] is preferred by more than one million engineers worldwide.
In 2023, over 5,500 companies worldwide had adopted CircleCI as their CI/CD tool, enabling them to accelerate product development, fix bugs promptly, and discover new features throughout the development process. Notable users of CircleCI include industry leaders such as Stripe, Tokopedia, Lyft, StackShare, and Delivery Hero, showcasing its end-to-end solutions for diverse development needs.
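A minimal `.circleci/config.yml` gives a feel for how CircleCI pipelines are declared; the Docker image and npm commands below are hypothetical placeholders, not taken from FSOFT's actual configuration:

```yaml
# Minimal CircleCI config sketch: one job wired into a workflow.
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/node:18.0   # illustrative convenience image
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: npm ci
      - run:
          name: Run tests
          command: npm test

workflows:
  build-and-test:
    jobs:
      - build
```

Each job runs in its own fresh container, a property that matters later when jobs need to share information.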
GitLab is a popular CI/CD tool that automates software development and testing, streamlining workflows and accelerating software delivery. It provides dedicated pipeline configuration and artifact storage, enabling smarter coding and better performance across environments, which ultimately improves the production deployment pipeline.
With more than 100,000 users, GitLab [20] is even used by leading organizations such as Sony, Goldman Sachs, IBM, and NASA, to name just a few.
Jenkins is a well-known open-source CI/CD tool that streamlines the development cycle by automating tasks, improving overall software quality, and accelerating delivery. It ensures that your code is fully validated and up to date. Jenkins is particularly useful for implementing CI/CD in large organizations, helping to prevent potential code conflicts and simplifying complex troubleshooting.
Jenkins has more than 100,000 users worldwide, including big names such as Facebook, Netflix, Instacart, LinkedIn, and Twitch, among many others.
The table below summarizes the comparison of the continuous-integration tools presented above, according to the criteria established and described earlier.
| Criterion | Jenkins | GitLab CI | CircleCI |
| --- | --- | --- | --- |
| Setup and installation | Easy | Easy | None required (SaaS) |
| Git integration | Supports Git through plugins | Supports Git natively | Supports Git through webhooks and plugins |
| Supported repositories | GitLab, GitHub, Bitbucket, TFS, CVS | GitLab | GitHub, Bitbucket, GitLab |
Table 6: Summary of the comparison of competitors in the integration-pipeline management domain
Using this comparative table, we can select our continuous-integration tool based on the important criteria we established. We find that CircleCI's features align well with our requirements.
Monitoring tools
To achieve application observability, it is essential to implement a monitoring strategy that extracts metrics and logs from each service. Monitoring solutions offer a visual way to observe the connections between events across services and their behavior in a production environment. While there are numerous monitoring tools available, this document briefly highlights a few key options.
Amazon CloudWatch is an observability and monitoring service provided by AWS that lets users collect and track metrics, monitor log files, set alarms, and respond to changes in their AWS resources. CloudWatch is a crucial service for managing and monitoring AWS resources effectively: implemented correctly, it improves the reliability, performance, and cost-effectiveness of your infrastructure and applications.
Prometheus is a monitoring system designed for cloud-native environments, originally developed at SoundCloud in 2012. Since joining the Cloud Native Computing Foundation (CNCF) in 2016, it has become the foundation's second hosted project after Kubernetes.
Prometheus collects metrics from targets at specified intervals using a server that stores all data as untyped time series. It features a pull-based model for scraping metrics, built-in alerting through Alertmanager, a multidimensional data model, and PromQL (Prometheus Query Language) for querying data. With numerous integrations and a strong open-source community, Prometheus has become widely adopted for cloud-native monitoring. Its use cases span sectors including DevOps, finance, healthcare, and real-time tracking. Users can access the data in three ways: graphical visualization, tabular format, or the HTTP API.
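The pull-based scraping described above is configured in `prometheus.yml`; the sketch below shows the idea, with the job name and target address of the backend service chosen purely for illustration:

```yaml
# Minimal prometheus.yml sketch: scrape two targets every 15 seconds.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]   # Prometheus scrapes its own metrics

  - job_name: backend-api             # hypothetical microservice endpoint
    metrics_path: /metrics
    static_configs:
      - targets: ["10.0.1.10:8080"]
```

At every interval, Prometheus pulls the `/metrics` endpoint of each target and appends the samples to its time-series database, which PromQL then queries.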
The table below summarizes the comparison of the monitoring tools presented above, according to the criteria established and described earlier.
| Criterion | Amazon CloudWatch | Prometheus |
| --- | --- | --- |
| Supported platforms | SaaS / Web | On-premises (Windows, Mac), SaaS / Web, iPhone, iPad, Android |
Table 7: Summary of the comparison of competitors in the application-monitoring domain
Using this comparative table, we can select our monitoring tool based on the important criteria we established. We find that Prometheus's features align well with our requirements.
Third-party services for storing information (third-party secret keepers)
When each job runs, a new instance is launched from the specified Docker image, and the files and memory used during the job are deleted when it completes. This isolation lets each job operate independently of the others and of its parent workflow. It therefore became essential to find a way to share information between jobs.
CircleCI lets users save and share files and folders, but how do we handle small pieces of information? Using CircleCI's built-in options would require writing the data to a file, caching or persisting that file, retrieving it in another job, and then parsing the information back into a variable. A more convenient option is a secret keeper (third-party secret keeper): an external service that receives our secret and returns it on demand. Three good options are Vault, MemStash, and KVDB.
HashiCorp Vault is a secure tool for managing sensitive information, commonly referred to as secrets: digital certificates, database credentials, passwords, and API encryption keys. With HashiCorp Vault, users can store these secrets and then authenticate, validate, and authorize client and user access to them.
3.6.2 KVDB.io
kvdb.io is a key-value store as a service for storing arbitrary data and metrics from applications, IoT devices [24], and anything else connected to the Internet. Whether for one device or a million, kvdb.io provides a simple API that anyone can use.
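The job-to-job sharing described above can be sketched in CircleCI syntax as two jobs exchanging a flag through kvdb.io's HTTP API. The `<bucket-id>` is deliberately left as a placeholder (kvdb.io issues one per user), and the images and key name are illustrative:

```yaml
# Sketch: passing a small value between isolated CircleCI jobs via kvdb.io.
jobs:
  run-migrations:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Record that migrations ran
          command: |
            # PUT the flag under a key scoped to this workflow
            curl -d "1" "https://kvdb.io/<bucket-id>/migration_${CIRCLE_WORKFLOW_ID:0:7}"

  smoke-test:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Read the migration flag
          command: |
            FLAG=$(curl -s "https://kvdb.io/<bucket-id>/migration_${CIRCLE_WORKFLOW_ID:0:7}")
            if [ "$FLAG" = "1" ]; then echo "Migrations were applied"; fi
```

Scoping the key to the workflow ID keeps concurrent pipeline runs from reading each other's flags.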
| Criterion | Vault | kvdb.io |
| --- | --- | --- |
| Supported platforms | On-premises & SaaS | SaaS |

Table 8: Summary of the comparison of competing third-party information-storage services
Summary and choices
At the end of each comparative study we settled on a technology choice among the tools presented; to recap:
- As source-code manager, we opted for GitHub.
- For configuration management, we used Ansible; as continuous-integration platform, we chose CircleCI.
- As infrastructure-provisioning tool, we settled on AWS CloudFormation.
The table below summarizes our choices:
| Tool | Role |
| --- | --- |
| GitHub | Source-code manager |
| KVDB.io | Information-storage tool |
| CloudFormation | Infrastructure-provisioning tool |
Study of the existing system
In this section, we present the continuous-delivery pipeline currently in place at FSOFT, along with our observations on its current state, in order to propose a solution to the various problems.
To manage the development and delivery of its projects, FSOFT has a continuous integration and delivery pipeline. This pipeline consists of [Figure 10]:
- The GitHub source-code manager
- A canary deployment strategy
Figure 10: Architecture of the initial pipeline
In this context, the Jenkins automation server acts as an orchestrator among the various tools through tasks. A task refers to compiling, testing, packaging, deploying, configuring, or executing actions on a project.
Jenkins tasks take several forms:
- Compiling and unit-testing a project
- Packaging a project for delivery
- Deploying to a production, test, or development environment
Beyond the environments where the pipeline tools are installed, each project has the following deployment stages:
- Build: everything related to making the code executable in production (for example, compiling). The goal is to produce an artifact.
- Test: all automated tests that run checks at the code level.
- Deploy infrastructure: everything related to creating server instances or copying pre-built application files onto an instance.
- Configure infrastructure: the process of configuring the EC2 instance that will serve as the application's backend server.
- Cleanup: responsible for removing old stacks and resources that are no longer needed in our AWS environment.
From the overview presented above, we identified the following problems, which the company aims to solve:
- Data inconsistencies between the application and the database, causing errors or unexpected behavior.
- When a database schema change requires a migration, deployment may fail if no migration-execution task is in place, leading to failed deployments and application downtime.
- Running smoke tests manually to verify the application's basic functionality is time-consuming and prone to human error.
Canary deployments have the following drawbacks:
- Added complexity in the deployment process
- Tests must be run on both the canary version and the stable version of the application
- Inconsistencies in user experience: some users may have access to new features or improvements while others do not, which can cause confusion or frustration
Proposed solution
The proposed architecture
Understanding abstract concepts such as a CI/CD pipeline is crucial before delving into the details. This foundational knowledge also makes it possible to communicate effectively with non-technical stakeholders about what happens inside the pipeline.
The pipeline is designed to execute its jobs sequentially: if a job fails, the pipeline stops and none of the remaining jobs run, ensuring no further action is taken after a critical step fails. The diagram we produced is shown in Figure 11.
In this diagram, we decided to add the following jobs to our CI/CD pipeline:
- Analyze: this Ansible task scans the code for vulnerabilities, security issues, and other potential problems.
- Run-migration: executes the application's database migrations. Database migrations are a way of managing changes to the database schema over time.
- Smoke-test: automates smoke tests, whose goal is to verify quickly that the deployed application works correctly after each deployment.
- Cloudfront-update: updates the CloudFront distribution in our AWS account by switching the origin to a new S3 bucket. This operation is part of our blue-green deployment strategy.
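The sequential, fail-fast ordering of these jobs can be expressed in CircleCI workflow syntax; the sketch below is illustrative (job implementations are omitted, and the exact names and ordering are an assumption based on the stages described in this section):

```yaml
# Sketch of the proposed workflow: each job requires its predecessor,
# so a failure anywhere stops everything downstream.
workflows:
  cicd:
    jobs:
      - build
      - test:
          requires: [build]
      - analyze:
          requires: [build]
      - deploy-infrastructure:
          requires: [test, analyze]
      - configure-infrastructure:
          requires: [deploy-infrastructure]
      - run-migrations:
          requires: [configure-infrastructure]
      - smoke-test:
          requires: [run-migrations]
      - cloudfront-update:
          requires: [smoke-test]
      - cleanup:
          requires: [cloudfront-update]
```

The `requires` keys encode the dependency chain: CircleCI only starts a job once everything it requires has succeeded, which is exactly the fail-fast behavior described above.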
Infrastructure elements
Once the automation pipeline was complete, it became essential to build an infrastructure for the microservices application. The following subsections detail the components that supply resources to the application. The accompanying figure gives a graphical representation of the components that make up the deployment infrastructure, with a deeper analysis of each element to follow.
Figure 12 illustrates the components of our AWS infrastructure. The networking elements in this section are responsible for providing network connectivity and enabling communication among the various components of the infrastructure.
A Virtual Private Cloud (VPC) is a secure, isolated private cloud hosted within a public cloud environment. VPC clients can run code, store data, and host websites just as they would in a traditional private cloud, but with the underlying infrastructure managed remotely by the public cloud provider.
Availability Zone: availability zones (AZs) are isolated data centers located in specific regions, in which public cloud services are created and operated.
Cloud computing companies typically operate multiple availability zones worldwide, ensuring that cloud service clients have a stable connection to services located in the nearest geographic area.
Subnet: a subnet is a range of IP addresses carved out of a larger network. In a Virtual Private Cloud (VPC), the private IP addresses of a private subnet are not reachable over the public internet, unlike standard public IP addresses.
Public subnets: public subnets have a route to the internet, allowing the resources inside them to communicate directly with the internet.
Private subnets lack a direct route to the internet, making them ideal for hosting resources that should not be directly accessible online, such as databases and internal services.
In summary, subnets are essential for segmenting and organizing your network within a VPC, playing a critical role in defining network connectivity and resource accessibility.
A NAT Gateway (Network Address Translation Gateway) lets instances in a private subnet connect to the Internet or to AWS services while preventing the Internet from initiating connections to those instances. This fully managed Amazon service requires no administrative effort, simplifying network management.
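The networking components described above can be sketched as a CloudFormation fragment; the CIDR ranges and logical names are illustrative choices, not the project's actual values:

```yaml
# CloudFormation sketch of the networking layer: a VPC with one public
# and one private subnet in the first availability zone of the region.
Resources:
  AppVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs ""]
      MapPublicIpOnLaunch: true   # instances here get public IPs

  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [0, !GetAZs ""]
```

A full stack would add an Internet Gateway and route tables for the public subnet and a NAT Gateway for the private one; they are omitted here to keep the sketch short.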
AWS S3 bucket: S3 stands for Simple Storage Service, AWS's cloud storage service [28]. S3 lets you store, retrieve, access, and back up any amount of data, at any time and from anywhere.
S3 is an object-based storage service, meaning all data is stored as objects.
Each object has three main components: the object's content, its unique identifier, and its metadata (including its name, size, and URL).
Amazon S3 has many use cases [29], including:
- Media storage: Amazon S3 is well suited to storing application images and videos while ensuring fast rendering; major services such as Amazon Prime, Amazon.com, Netflix, and Airbnb use it for this purpose. Combined with Amazon CloudFront, delivery is significantly faster thanks to CloudFront's edge locations.
- Backup and archiving of critical data: S3 automatically replicates data across regions, ensuring maximum availability and durability.
- Analytics: Amazon S3 offers sophisticated in-place querying, allowing powerful analytics to run on data at rest in S3.
- Data archiving: terabytes of data can be moved from Amazon S3 to Amazon Glacier's very inexpensive and durable archiving solution for compliance purposes.
- Static website hosting: S3 stores static objects of all kinds. With the rise of single-page applications such as Angular and ReactJS, hosting on traditional web servers can be costly; S3's static-website hosting feature lets users serve a site under their own domain while avoiding high server costs.
CloudFront: Amazon CloudFront is a content delivery network (CDN) provided by Amazon Web Services. By using a CDN, companies can speed up the delivery of their content. As AWS's dedicated CDN solution, CloudFront integrates seamlessly with other AWS products, making it a natural choice for businesses already using AWS services. It enables efficient content delivery to users over the Internet while reducing the load on their own infrastructure, and supports applications such as web content delivery, media streaming, software-update distribution, and API provisioning.
CloudFront also plays an important role in the serverless ecosystem: many websites and applications rely on static files for most of their operations, with only certain content sections customized through serverless APIs. These static components include HTML pages, CSS files, JavaScript code, and media assets such as images and videos.
CloudFront integrates with other AWS services such as Amazon S3 buckets, EC2, and more.
CloudFront can use an Amazon S3 bucket as the origin from which it fetches files before caching them at its edge locations. It also supports an Amazon EC2 server or an Elastic Load Balancing endpoint as the origin of a CloudFront distribution.
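The S3-as-origin arrangement just described can be sketched in CloudFormation; the bucket domain name below is a hypothetical placeholder:

```yaml
# CloudFormation sketch: a CloudFront distribution fronting an S3 origin.
Resources:
  WebDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: my-frontend-bucket.s3.amazonaws.com  # placeholder bucket
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```

In a blue-green switch like the one proposed earlier, the `DomainName` of the origin is what gets pointed at the new bucket.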
AWS security follows a shared-responsibility model: AWS ensures the security of the cloud, while customers are responsible for securing their data in the cloud. Various tools and services from AWS and other providers help meet security and compliance objectives. In particular, AWS security groups play a crucial role in protecting Amazon EC2 resources.
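A security group is essentially a stateful firewall attached to instances; the CloudFormation sketch below allows only inbound HTTP, with the VPC reference and port choice as illustrative assumptions:

```yaml
# CloudFormation sketch: a security group permitting only inbound HTTP.
Resources:
  BackendSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP to the backend instances
      VpcId: !Ref AppVPC          # illustrative reference to the app's VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0       # open to the world; tighten in practice
```

Because security groups are stateful, return traffic for allowed connections is permitted automatically without an explicit egress rule.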