
AWS CodeBuild Setup (AWS Web Console)



AWS CodeBuild is a fully managed build service provided by AWS for project build requirements. It can run on its own or as the build stage of the AWS CodePipeline service.

The instructions below are aimed at beginners; advanced configurations are not covered, and users are encouraged to explore them once they understand the basics. Be aware that you will be billed for the resources each build uses, so please check the AWS pricing calculator.


Step 1:

Search for the CodeBuild service in the AWS web console and click on it.


Step 2:

Select "Create build project" to create a new build configuration.


Here you can enter a name for your project and select the source provider that hosts the code you want to build. The available options are shown below.


A sample GitHub connection looks something like the picture below.
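The same source configuration can also be scripted instead of clicked through. A minimal sketch of the input JSON for `aws codebuild create-project --cli-input-json`, where the project name and repository URL are placeholders you would substitute:

```json
{
  "name": "my-sample-project",
  "source": {
    "type": "GITHUB",
    "location": "https://github.com/your-org/your-repo.git"
  }
}
```

The console wizard fills in this same structure behind the scenes; the environment, artifacts, and logging sections configured in the later steps belong in the same JSON document.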


Step 3:

Now we have to set up the environment to be used for our build.


Here we can either choose an AWS-managed Docker image or our own image for the build. There are different image versions to select from:

Platform                 | Image identifier                                | Definition
Amazon Linux 2           | aws/codebuild/amazonlinux2-x86_64-standard:3.0  | al2/standard/3.0
Amazon Linux 2           | aws/codebuild/amazonlinux2-x86_64-standard:4.0  | al2/standard/4.0
Amazon Linux 2           | aws/codebuild/amazonlinux2-aarch64-standard:1.0 | al2/aarch64/standard/1.0
Amazon Linux 2           | aws/codebuild/amazonlinux2-aarch64-standard:2.0 | al2/aarch64/standard/2.0
Ubuntu 18.04             | aws/codebuild/standard:4.0                      | ubuntu/standard/4.0
Ubuntu 20.04             | aws/codebuild/standard:5.0                      | ubuntu/standard/5.0
Ubuntu 22.04             | aws/codebuild/standard:6.0                      | ubuntu/standard/6.0
Windows Server Core 2019 | aws/codebuild/windows-base:2019-1.0             | N/A
Windows Server Core 2019 | aws/codebuild/windows-base:2019-2.0             | N/A
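When creating the project from the CLI rather than the console, the image identifier from the table goes into the project's environment block. A sketch using one of the Amazon Linux 2 images; the compute type shown is just the smallest general-purpose option, not a recommendation:

```json
{
  "environment": {
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/amazonlinux2-x86_64-standard:4.0",
    "computeType": "BUILD_GENERAL1_SMALL"
  }
}
```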


Step 4:

The actual commands used for building and testing the code are added via a "buildspec.yml" file. This file can be added directly to the source code or configured in the web console.



We can refer to the buildspec file documentation in the AWS docs. A sample buildspec file looks like this:

version: 0.2
phases:
  build:
    commands:
      - ls -al && mvn clean install
  post_build:
    on-failure: CONTINUE
    commands:
      - pwd
      - mkdir ./artifacts
      - find ./ -name \*SNAPSHOT.jar -exec cp -R -u -p "{}" ./artifacts \;
      - find ./ -name \*SNAPSHOT.war -exec cp -R -u -p "{}" ./artifacts \;
      - ls ./artifacts
artifacts:
  files:
    - 'artifacts/*'
  discard-paths: no
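The post_build commands are plain shell, so you can dry-run the artifact-collection pattern locally before putting it in the buildspec. A sketch where the "demo" directory and the jar name are made-up stand-ins for a real Maven build's output tree:

```shell
# Fake a Maven output tree with one snapshot jar in it.
mkdir -p demo/target demo/artifacts
touch demo/target/app-1.0-SNAPSHOT.jar

# Same find/cp pattern as the buildspec: pick up every *SNAPSHOT.jar
# under the build tree and copy it into the artifacts directory.
find demo/target -name \*SNAPSHOT.jar -exec cp -p "{}" demo/artifacts \;

# Confirm what would be handed to the artifacts stage.
ls demo/artifacts
```

The `-p` flag preserves timestamps, which keeps incremental deploy tooling from treating unchanged jars as new.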


Step 5:

In the steps above we defined an artifacts section, which is responsible for identifying the files needed for deployment. To store these identified artifacts, we have to configure a stage in the build project that specifies where they should be stored.


Here we have configured the artifacts to be stored in AWS S3. Artifacts are stored in ZIP format, as they are easy to handle and easy to deploy using the AWS CodeDeploy service.
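In CLI terms, this S3-with-ZIP choice corresponds to the project's artifacts block. A sketch, where the bucket name is a hypothetical placeholder:

```json
{
  "artifacts": {
    "type": "S3",
    "location": "my-artifact-bucket",
    "packaging": "ZIP"
  }
}
```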

Step 6:

There are two ways to view the build logs: one is CloudWatch, and the other is storing the logs on S3.
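Both destinations can be enabled at once through the project's logs configuration. A sketch with both turned on; the log group name and bucket path are hypothetical:

```json
{
  "logsConfig": {
    "cloudWatchLogs": {
      "status": "ENABLED",
      "groupName": "my-codebuild-logs"
    },
    "s3Logs": {
      "status": "ENABLED",
      "location": "my-log-bucket/build-logs"
    }
  }
}
```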



Once all these configurations are in place, we can click "Create build project". Once the project is saved successfully, you can trigger the build; the build goes through all the stages shown below.



We can view the logs in the Build logs tab. Also monitor the Resource utilization tab for that build's resource usage, as it has an impact on your billing.



Happy learning!!😊


 

