Jenkins
Jenkins is an open-source Continuous Integration and Continuous Delivery tool used to build, deploy and automate any project.
You can train yourself on Katacoda without installing anything.
Update Jenkins on the server
Note
It is better to have a UAT or DEV Jenkins instance,
so you can test the upgrade of jenkins.war and of the plugins
before applying it to your PROD Jenkins server.
Manual upgrade - server based
# Connect to the Jenkins server
su - jenkins
# Back up the current war
mkdir -p /tmp/jenkins_bck
cp /usr/lib/jenkins/jenkins.war /tmp/jenkins_bck/
# On the Jenkins server, go to the jenkins directory:
cd /usr/lib/jenkins/
# Delete the existing .war
rm -f /usr/lib/jenkins/jenkins.war
# Download and install the new war
wget https://updates.jenkins-ci.org/latest/jenkins.war
service jenkins restart
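Since the procedure above deletes the existing war, keeping timestamped copies makes rollback easier. A minimal helper sketch (the paths in the example call are assumptions, adjust to your installation):

```shell
# Sketch: copy a file into a backup directory with a timestamp suffix,
# so repeated upgrades do not overwrite the previous backup.
backup_war() {
  src="$1"
  destdir="$2"
  mkdir -p "$destdir"
  cp "$src" "$destdir/$(basename "$src").$(date +%Y%m%d%H%M%S)"
}

# Example (assumed paths):
# backup_war /usr/lib/jenkins/jenkins.war /tmp/jenkins_bck
```

To roll back, copy the most recent timestamped war back in place and restart the service.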
Jenkins Servers Backup/Synchronization with Rsync
Note
Case of synchronization between 2 Jenkins master nodes (1 active master + 1 standby).
Users use only the active master... In the meantime, files and configuration are synchronized to the standby node.
In case of an issue with the active node, Jenkins is started on the standby node, which becomes the principal node.
Synchronization from the active node to the standby node
#!/bin/bash
## File permissions to set: rwxr-xr-x 1 jenkins jenkins
##-rlptogD -vz --delete-after --exclude-from=ExclusionRSync
##Script: backup from the Master server to the Backup server
##backup <<<<================================* Master
##When you run this script, redirect its output to this file: /var/log/rsync_backup.log
## */5 * * * * su - jenkins -c "/opt/application/amp/jenkins/rsync_backup.sh" > /var/log/rsync_backup.log
master_ip="10.xx.xx.xx"
master_user="jenkins"
master_path="/opt/application/amp/jenkins/"
backup_ip="10.xxx.xxx"
backup_user="jenkins"
backup_path="/opt/application/amp/jenkins/"
rsync_sourcepath="$master_user@$master_ip:$master_path"
rsync_destinationpath=$backup_path
rsync_options="-rlptogD -vz --delete-after --delete-excluded --exclude-from=/opt/application/amp/synchro_jenkins/exclusionrsync.txt"
mail_sender="jmi2_admin@gmail.com"
mail_recipient="toto@gmail.com"
mail_subject="JMI2: Rsync Backup Report"
master_pongurl="https://jenkins.master1.com/metrics/yyyy/ping"
backup_pongurl="https://jenkins.master2.com/metrics/yyyy/ping"
packet_number="3"
ping -c $packet_number $master_ip
status_code=$?
#
if [ "${status_code}" != "0" ]
then
#### Case1: server is down
echo " "
echo "Master server does not answer to ping requests"
echo " "
echo "It may be down!"
echo "Please check it to see what is wrong"
echo " "
echo "***Starting backup Jenkins***"
## Start backup jenkins
service jenkins start
##Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
##Case2: server is available
##Server is available, we now check the Jenkins service
echo " "
echo "Master server is OK"
echo " "
pong_master=$(curl -k $master_pongurl)
pong_backup=$(curl -k $backup_pongurl)
if [ "${pong_master}" != "pong" ]
then
#Case2_subcase1: Master Jenkins is down
echo "Master Jenkins does not answer to pong requests"
echo " "
echo "It may be down!"
echo " "
##Optionally try rsync anyway
#rsync $rsync_options -e "ssh -i /home/jenkins/.ssh/rsync168_rsa" $rsync_sourcepath $rsync_destinationpath
echo "***Starting backup Jenkins***"
# Start jenkins
service jenkins start
##Inform the team that a manual action is needed
echo "Backup Jenkins has been started; the Master Jenkins is probably down."
echo " "
echo "Please check the Master server to see what is wrong"
echo " "
## Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
##Case2_subcase2: Master Jenkins is available
echo "Master Jenkins is OK"
echo " "
if [ "${pong_backup}" != "pong" ]
then
## Backup Jenkins is not started!
echo "Backup Jenkins is not running!"
echo "Launching Rsync task!"
echo " "
##Launch Rsync
rsync $rsync_options -e "ssh -i /home/jenkins/.ssh/rsync168_rsa" $rsync_sourcepath $rsync_destinationpath
echo " "
echo "Rsync task has finished!"
echo " "
## Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
echo "Backup Jenkins is running!"
echo "Nothing to do in this case"
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
fi
fi
fi
exit 0
From the former standby node back to the former active node
(to switch back to the original active node)
#!/bin/bash
## File permissions to set: rwxr-xr-x 1 jenkins jenkins
##-rlptogD -vz --delete-after --exclude-from=ExclusionRSync
##Script: backup from the Master server to the Backup server
##backup <<<<================================* Master
##When you run this script, redirect its output to this file: /var/log/rsync_backup.log
## */5 * * * * su - jenkins -c "/opt/application/amp/jenkins/rsync_backup.sh" > /var/log/rsync_backup.log
master_ip="10.xx.xx.xx"
master_user="jenkins"
master_path="/opt/application/amp/jenkins/"
backup_ip="10.xxx.xxx"
backup_user="jenkins"
backup_path="/opt/application/amp/jenkins/"
rsync_sourcepath="$master_user@$master_ip:$master_path"
rsync_destinationpath=$backup_path
rsync_options="-rlptogD -vz --delete-after --delete-excluded --exclude-from=/opt/application/amp/synchro_jenkins/exclusionrsync.txt"
mail_sender="jmi2_admin@gmail.com"
mail_recipient="toto@gmail.com"
mail_subject="JMI2: Rsync Backup Report"
master_pongurl="https://jenkins.master1.com/metrics/yyyy/ping"
backup_pongurl="https://jenkins.master2.com/metrics/yyyy/ping"
packet_number="3"
ping -c $packet_number $master_ip
status_code=$?
#
if [ "${status_code}" != "0" ]
then
#### Case1: server is down
echo " "
echo "Master server does not answer to ping requests"
echo " "
echo "It may be down!"
echo "Please check it to see what is wrong"
echo " "
echo "***Starting backup Jenkins***"
## Start backup jenkins
service jenkins start
##Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
##Case2: server is available
##Server is available, we now check the Jenkins service
echo " "
echo "Master server is OK"
echo " "
pong_master=$(curl -k $master_pongurl)
pong_backup=$(curl -k $backup_pongurl)
if [ "${pong_master}" != "pong" ]
then
#Case2_subcase1: Master Jenkins is down
echo "Master Jenkins does not answer to pong requests"
echo " "
echo "It may be down!"
echo " "
##Optionally try rsync anyway
#rsync $rsync_options -e "ssh -i /home/jenkins/.ssh/rsync168_rsa" $rsync_sourcepath $rsync_destinationpath
echo "***Starting backup Jenkins***"
# Start jenkins
service jenkins start
##Inform the team that a manual action is needed
echo "Backup Jenkins has been started; the Master Jenkins is probably down."
echo " "
echo "Please check the Master server to see what is wrong"
echo " "
## Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
##Case2_subcase2: Master Jenkins is available
echo "Master Jenkins is OK"
echo " "
if [ "${pong_backup}" != "pong" ]
then
## Backup Jenkins is not started!
echo "Backup Jenkins is not running!"
echo "Launching Rsync task!"
echo " "
##Launch Rsync
rsync $rsync_options -e "ssh -i /home/jenkins/.ssh/rsync168_rsa" $rsync_sourcepath $rsync_destinationpath
echo " "
echo "Rsync task has finished!"
echo " "
## Alert mail
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
else
echo "Backup Jenkins is running!"
echo "Nothing to do in this case"
mail -v -s "${mail_subject}" -r "${mail_sender}" "${mail_recipient}" < /opt/application/amp/synchro_jenkins/rsync_log.txt
fi
fi
fi
exit 0
Excluded files (content of exclusionrsync.txt)
jenkins.model.JenkinsLocationConfiguration.xml
jobs/*/*/lastFailedBuild
jobs/*/*/lastUnstableBuild
jobs/*/*/lastUnsuccessfulBuild
jobs/Update_Local_Parameter/builds
logs/*
projects/*
updates/*
workspace/*
backup_restore.log
backup_restore.sh
rsync_backup.sh
exclusionrsync.txt
rsync_log.txt
jenkins_bck
backupof_backup_restore.sh
.git/
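The exclusion file can be validated with an rsync dry run before wiring it into the cron job. A minimal local sketch (all paths below are throwaway examples, not the production ones):

```shell
# Build a tiny throwaway tree plus an exclusion file, then do a dry run.
SRC=/tmp/rsync_demo/src
DST=/tmp/rsync_demo/dst
mkdir -p "$SRC/workspace" "$DST"
echo keep > "$SRC/config.xml"
echo skip > "$SRC/workspace/tmp.txt"
printf 'workspace/*\n' > /tmp/rsync_demo/exclusion.txt
# -n (--dry-run) only lists what would be transferred, nothing is copied
rsync -rlptogD -vzn --delete-after --exclude-from=/tmp/rsync_demo/exclusion.txt "$SRC/" "$DST/"
```

config.xml should appear in the dry-run listing while workspace/tmp.txt should not; the same --exclude-from option can then be used in the backup script with more confidence.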
Some Scripts
Shared library - pipeline : https://www.jenkins.io/doc/book/pipeline/shared-libraries/
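Once the library is configured in Jenkins, the functions in files like utility.groovy below become callable from pipelines. A minimal sketch of a Jenkinsfile loading it (the library name mylib is an assumption, and this assumes utility.groovy sits under vars/ in the library repository):

```groovy
// Jenkinsfile sketch: load a configured shared library and call one of
// its utility functions. "mylib" is a placeholder for the library name
// declared under Manage Jenkins > System > Global Pipeline Libraries.
@Library('mylib') _

pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                script {
                    // printChild() is defined in utility.groovy below
                    echo utility.printChild('windows', ['pegase', 'pegase2012'])
                }
            }
        }
    }
}
```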
utility.groovy
def iniGetter(mymap, mylist, myhost){
def objects = mylist
def host = myhost
def liste = []
def default_list = []
objects.each { obj ->
if(mymap.containsKey(obj)){
liste = mymap.get(obj)
if(! liste.contains(host)){
liste.add(host)
mymap.put(obj, liste)
}
liste = default_list
}else{
mymap.put(obj, [host])
}
} //end obj
} //end func
def printMap(mymap, myname){
def content = ""
if(mymap.size() >= 1){
content = content +"\n####### field: " + myname + "\n"
mymap.keySet().each{
content = content + "[" + it +"]" + "\n"
def temp_list = mymap.get(it)
if(! temp_list.isEmpty()){
temp_list.each { elt ->
content = content + elt + "\n"
} //each temp_list
} // if empty
content = content + "\n"
} //each map
} //if size
return content
}
def printChild(myname, mylist){
def content = ""
content = content +"\n####### " + myname + " children\n"
content = content + "[" + myname +":children]" + "\n"
mylist.each { child ->
content = content + child + "\n"
}
content = content + "\n"
return content
}
def printWin(myli){
def cont= ""
def windows_os=["pegase", "pegase2012", "pegase2013","windows_server_2008", "windows_server_2016"]
def var_ct = '''
[windows:vars]
ansible_connection=winrm
ansible_winrm_scheme=https
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
validate_certs=false
ansible_winrm_transport=ntlm
ansible_winrm_operation_timeout_sec=60
ansible_winrm_read_timeout_sec=70
ansible_become=false
'''
def comp_l = windows_os.intersect(myli)
if(! comp_l.isEmpty()){
cont=cont + printChild("windows", comp_l) + "\n" + var_ct
}
return cont
}
////////////Docker function
//equivalent of the Jenkins job
def DockerAnsible(InstallationStep, modules){
////modules = "compo1,compo2,compox"
//cleanWs()
def Basicat = getBasicat()
def Platform = getPlatform()
def HalfPlatform = getHalfPlatform()
if (!TemplateType){ TemplateType = "newversion" } //no "def" here, otherwise the assignment stays local to this block
//check parameters
if(InstallationStep == null || modules == null){
error('Please set all parameters before launching this job')
}
//content: stores the extra vars
def content= "Env: " + Platform + "\n"
content = content + "PlaybookAction: " + InstallationStep + "\n"
if(HalfPlatform == "all"){
content = content + "half_platform: " + HalfPlatform+ "\n"
}else{
content = content + "half_platform: half_platform" + HalfPlatform+ "\n"
}
if(modules !="null"){
def liste= modules.split(",")
for(int i=0; i < liste.size(); i++){
if(liste[i].trim() == "ora-api"){
content = content + "oraapi_sources: 1"+ "\n"
}else{
content = content + liste[i].trim() + "_sources: 1"+ "\n"
}
}
}else{
error("Argument modules value is wrong!")
}
//
def TemplateName
if(TemplateType == "newversion"){
TemplateName = NewTemplate(Basicat, Platform, InstallationStep)
}else{ TemplateName = OldTemplate(Basicat, Platform, InstallationStep) }
println("TemplateType: " + TemplateType)
println("template name: " + TemplateName)
println("extras variables: \n" + content)
//call tower
println("call tower")
env['jobtemplate'] = TemplateName
wrap([$class: 'AnsiColorBuildWrapper', colorMapName: "xterm"]) {
ansibleTower(
towerServer: 'AWX EIN PROD',
jobTemplate: jobtemplate,
importTowerLogs: true,
removeColor: false,
extraVars: content
)
}
}
def OldTemplate(basicat, platf, step){
def TemplateName
switch(step) {
case 'install':
TemplateName = basicat+"_"+platf+"_deployment"
break;
case 'clean':
TemplateName = basicat+"_"+platf+"_clean"
break;
//rollback and validate
default:
TemplateName = basicat+"_"+platf+"_"+ step
break;
}
return TemplateName
}
def NewTemplate(basicat, platf, step){
def TemplateName
switch(step) {
case 'install':
TemplateName = basicat+"_"+platf+"_deployment"
break;
//clean, rollback and validate
default:
TemplateName = basicat+"_"+platf+"_manage"
break;
}
return TemplateName
}
//////////////////////////
def parsJson(json){
def slurper = new groovy.json.JsonSlurper()
return slurper.parseText(json)
}
def processStatusMail(String name, String status, String url, String duration){
//create format for email templates
def color = mapColorMail().get(status)
return "<tr><td>${name}</td><td><font color=#${color}>${status}</font></td><td>${duration}</td><td>${url}</td></tr>"
}
def mapstar(){
//Map for star of status job
return ["AM":"star-silver","UAT":"star-silver","MNT":"star-orange","E1":"star-purple","E2":"star-orange","BENCH":"star-red-e","PEXP":"star-blue-e","PROD":"star-gold"]
}
def mapColorMail(){
//Map for color of status job
return ['SUCCESS': '7ED529', 'FAILURE': 'FF6666', 'UNSTABLE': 'FFD700', 'ABORTED': 'ACACAC']
}
def getBasicat() {
//get app basicat
job = JOB_NAME.split('_')
return job.first().toLowerCase()
}
def getPlatform() {
//get app platform
job = JOB_NAME.split('_')
return job.last().toLowerCase().trim()
}
def getOriginBranch() {
//get OriginBranch if existing
if (params.OriginBranch.isEmpty() || !OriginBranch || OriginBranch == null || OriginBranch.contains("\${")) {
return false
}else{
return OriginBranch
}
}
def getOriginOrRevision(){
//get origin
// origin is Revision or variable OriginBranch
originBranch = getOriginBranch()
if (!originBranch) {
origin = getMapParameters().get('Revision')
if(!origin){
error("Sorry, no Revision commit variable found")
}else{
return origin
}
}else{
return originBranch
}
}
def getModulesVersions() {
//get ModulesVersion
if (params.ModulesVersions.isEmpty() ||!ModulesVersions || ModulesVersions == null || ModulesVersions.contains("\${")) {
modulesVersions = getMapParameters().get('ModulesVersions')
if(!modulesVersions){
error("Sorry, no ModulesVersions variable found")
}else{
return modulesVersions
}
}else{
return ModulesVersions
}
}
def getModulesInstall(ModulesVersions){
//Give Map with modules version
//Need the json with the ModulesVersions
def modulesJson = parsJson(ModulesVersions)
def map = [:]
modulesJson.each{
if(it.Install){
map.put(it.Name,it.Version.toLowerCase())
}
}
return map
}
//review this function when it is actually called
def getModulesOtherActionInstall(ModulesVersions){
def modulesJson = parsJson(ModulesVersions)
def otheractionList = [] //the list must be declared before use (the unused map was removed)
modulesJson.each{
if(it.Install){
if(!(it.OtherAction.isEmpty())){
otheractionList.add(it.OtherAction.get(0))
}else{
otheractionList.add("null")
}
}
}
return otheractionList
}
def getOnlyVersions(ModulesVersions){
map = getModulesInstall(ModulesVersions)
def versionList = []
map.each{ k, v ->
versionList.add(v)
}
return versionList
}
def getOnlyModules(ModulesVersions){
map = getModulesInstall(ModulesVersions)
def moduleList = []
map.each{ k, v ->
moduleList.add(k)
}
return moduleList
}
def getOtherAction(ModulesVersions){
def modulesJson = parsJson(ModulesVersions)
def otheractionList = []
modulesJson.each{
if(it.Install){
if( it.OtherAction.size() > 0 ){
otheractionList.add(it.OtherAction.get(0))
}else{
otheractionList.add("null")
}
}
}
return otheractionList
}
def getHalfPlatform() {
//get HalfPlatform and check that it is an integer
if (params.HalfPlatform.isEmpty()) {
error("Sorry, HalfPlatform is a mandatory variable, please set it")
}else{
if(HalfPlatform.toString().isInteger() || HalfPlatform == "all"){
return HalfPlatform
}else{
error("HalfPlatform is not an integer")
}
}
}
def getHalfplatformString(){
return "half_platform"
}
def getUrlGitEin(appli){
//need basicat of application and return the git url
return GITLAB_EIN_APPLI+'/'+appli+'.git'
}
def getMapParameters(){
//get a map of all parameters from the artifact file
//get JobName and JobNumber
try {
step([$class: 'CopyArtifact',
projectName: JobName,
selector: [$class: 'SpecificBuildSelector', buildNumber: JobNumber]])
}catch(none){
error("Sorry, we are not able to get the module versions you want to install. Either you promoted a failed build or no module is set.")
}
def propfile = readProperties file: 'parameters.properties'
def map = [:]
propfile.each{ k, v ->
map.put(k,v)
}
return map
}
def createFileCallAWXEIN(credential){
//create config file for call AWX API
credentialUse = ""
switch (credential) {
case 'jenkins':
credentialUse = "jenkins_awx_ein_prod_qNOJ1qXyxUxV9USr9JMv"
break
case 'awx':
credentialUse = "jenkins_awx_ein_prod_awx"
break
default:
error("Sorry the credential to use doesn't exist.")
break
}
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: credentialUse,
usernameVariable: 'TowerLogin', passwordVariable: 'TowerPassword']]) {
writeFile file: '.tower_cli.cfg', text: "host=https://${AWX_EIN}\nusername = "\
+TowerLogin+"\npassword = "+TowerPassword
}
sh(script: '#!/bin/sh -e\nchmod 500 .tower_cli.cfg', returnStdout: false)
}
def cancelAWXJob(credential){
//For set credential please see function createFileCallAWXEIN
node('ansible'){
deleteDir()
createFileCallAWXEIN(credential)
job = getBasicat()+"_"+getPlatform()+"_deployment"
def json = sh(script: "#!/bin/sh -e\ntower-cli job list --status=running -f json", returnStdout: true).trim()
def idCancel = null
parsJson(json).results.each{
if(it.summary_fields.job_template.name == job){
idCancel= it.id
}
}
if(idCancel != null){
sh(script: "#!/bin/sh -e\ntower-cli job cancel "+idCancel, returnStdout: true)
}
}
}
def checkPackageNexusEIN(tabl){
//check if a package have this tag
//warning: when you call the Nexus search function, the groups must be in the same order
//as the tree on Nexus, e.g. application, basicat, component, and version
def urlSource = []
group = "%2Fapplications"
tabl.each{
group = group+"%2F"+it
}
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: 'api-nexus-credential-caf685bb7c',
usernameVariable: 'userNexus', passwordVariable: 'pwdNexus']]) {
json = sh(script: "#!/bin/sh -e\ncurl -u "+userNexus+":"+pwdNexus+" -k -X GET '"+EIN_NEXUS_SEARCH+EIN_REPO_HTTP+"&group="+group+"' 2>/dev/null", returnStdout: true).trim()
def urljson = parsJson(json)
//check if the package existing on nexus
if(urljson.items.empty){
println("⚠ Sorry, no package found on Nexus. ⚠\nPackage: " + tabl)
return null
} else{
urljson.items.each{
if(!it.name.contains(PROMOTED_PACKAGE_NAME)){
urlSource.add(it.assets.downloadUrl.first())
}
}
if(urlSource.isEmpty()){
println("⚠ Sorry, no package found on Nexus. ⚠\nPackage: " + tabl)
return null
}else{
return urlSource
}
}
}
}
//get MR label
def getMR() {
if(params.MR == null || params.MR.toString().contains('${')){
return false
}else{
return params.MR.toLowerCase()
}
}
//get campaign name
def getCampaign() {
if(params.TestCampaign.isEmpty() || TestCampaign.toString().contains('${')){
return false
}else{
return params.TestCampaign.toLowerCase()
}
}
//get server
def getServer() {
def platform = getPlatform()
def halfPlatform = getHalfPlatform()
job = platform + halfPlatform
return job.toLowerCase()
}
//get server
def getTestType() {
if(params.Tests.isEmpty() || Tests.toString().contains('${')){
return false
}else{
return params.Tests.toLowerCase()
}
}
def getJobName(){
//get variable JobName
if (!JobName) {
return false
}else{
return JobName
}
}
def getJobNumber(){
//get variable JobNumber
if (!JobNumber) {
return false
}else{
return JobNumber
}
}
def getMailTag(){
if (params.MailTag.isEmpty() || !MailTag || MailTag == null || MailTag.contains("\${")) {
return ""
}else{
return MailTag
}
}
def getMailRecipient(){
if (params.MailRecipient.isEmpty() || !MailRecipient || MailRecipient == null || MailRecipient.contains("\${")) {
return ""
}else{
return MailRecipient
}
}
def deleteParameter(){
//should be executed on master slave
def jobName = JOB_NAME
def job = Jenkins.instance.getItem(jobName)
def parameter = job.getBuildByNumber(BUILD_ID.toInteger()).getAction(hudson.model.ParametersAction)
job.getBuildByNumber(BUILD_ID.toInteger()).actions.remove(parameter)
}
def setVaultAWX(basicat,env,tag,pwd){
try{
json = ""
json = sh(script: '#!/bin/sh -e\ntower-cli credential get -n '+basicat+'_'+env+'_vault -f json', returnStdout: true).trim()
vaultid = parsJson(json).id
input = '\'{"vault_password": "'+pwd+'", "vault_id": "'+tag+'"}\''
json = sh(script: '#!/bin/sh -e\ntower-cli credential modify --inputs='+input+' '+vaultid+' 2>/dev/null', returnStdout: true).trim()
}catch(Exception e1){
error("Sorry vault credential doesn't exist on AWX")
}
}
awx.groovy
import groovy.json.*
import hudson.model.*
import java.util.*;
//retrieve environment variables
def env = System.getenv();
def workspace = env['WORKSPACE'];
def basicat = env['Basicat'];
def awxuser = env['awxuser'];
def awxtoken = env['awxtoken'];
def slurper;
def awxurl="https://awxexampleom"
def inventory = new File(workspace +"/" + basicat +"_inventory.yml" )
def temp_inventory = new File(workspace +"/" + basicat +"_tempinventory.txt" )
def descap = " " + " "
def tescap = " " + " " + " " + " "
def jdata
def outfile
def cmdout
def getchildren(parent){
def child;
def childjson;
def outputlist = []
def slur
child = "tower-cli group list -a --parent" + " " + parent + " -f json"
childjson = child.execute().text;
slur = new JsonSlurper().parseText(childjson)
slur.results.each{
//println "it: " + it.name + "\n"
outputlist.add(it.name)
}
return outputlist;
}
def gethost(compo_list){
def hostchild;
def hostchildjson
def hostmap = [:]
def hostslur
for(int j=0; j < compo_list.size(); j++){
hostchild = "tower-cli host list -a --group" + " " + compo_list.get(j) + " -f json"
hostchildjson = hostchild.execute().text;
hostslur = new JsonSlurper().parseText(hostchildjson)
hostslur.results.each{
//println "it: " + it.name + "\n"
hostmap.put(it.name,it.id)
}
}
return hostmap;
}
def sub_system = ['platon7', 'platon6','pegase']
def platform_list = ['half_platform1', 'half_platform2', 'half_platform3']
def environments_all = getchildren("environments")
def components_list = getchildren(basicat)
def temp_middleware = getchildren("middlewares")
def operating_system = []
def middleware_list = []
def environments_list = []
def midfor_list
def hostname_list = []
def hostid_list = []
//operating_system = linux_system.plus(win_system)
sub_system.each {
operating_system= operating_system.plus(getchildren(it))
}
for(int i=0; i<temp_middleware.size(); i++){
midfor_list = getchildren(temp_middleware.get(i))
middleware_list = middleware_list.plus(midfor_list)
}
def hosts_map=gethost(components_list)
//println "hosts_map: " + hosts_map
hosts_map.each{ k, v ->
if(hostname_list.contains("${k}") == false){ hostname_list.add("${k}") }
if(hostid_list.contains("${v}") == false){ hostid_list.add("${v}") }
}
println(" ")
/*
//println "\nenvironments_list: " + environments_list
println "\ncomponents_list: " + components_list
//println "\nlinux_system: " + linux_system
//println "win_system: " + win_system
println "\noperating_system: " + operating_system
println "\nmiddleware: " + middleware_list
println "\nplatform_list: " + platform_list
println "\nhostname_list: " + hostname_list
println "\nhostid_list: " + hostid_list
*/
println(" ")
for(int y=0; y<hostid_list.size(); y++){ //main loop over hosts
def hostdata_list = []
def cmd = "curl -k -X GET --fail --silent --user" + " " + awxuser + ":" + awxtoken + " "
cmd = cmd + awxurl +"/api/v2/hosts/" + hostid_list.get(y) + "/all_groups/?format=json" + " -o " + workspace +"/" + hostname_list.get(y) + ".json"
cmdout = cmd.execute()
cmdout.waitFor()
outfile = new File(workspace +"/" + hostname_list.get(y) + ".json").text;
jdata = new JsonSlurper().parseText(outfile)
jdata.results.each{
//println "it: " + it.name
hostdata_list.add(it.name)
} //end jdata results
//println ("\n")
environments_all.each { platf ->
if (hostdata_list.contains(platf) == true) {
if(environments_list.contains(platf) == false){ environments_list.add(platf)}
temp_inventory.append("\n\n" + hostname_list.get(y) + ":")
temp_inventory.append("\n" + descap + "env: " + platf)
temp_inventory.append("\n" + descap + "platform:")
platform_list.each { half ->
if (hostdata_list.contains(half) == true) { temp_inventory.append("\n" + tescap + "- " + half) }
}
temp_inventory.append("\n" + descap + "component:")
components_list.each { compos ->
if (hostdata_list.contains(compos) ==true) { temp_inventory.append("\n" + tescap + "- " + compos) }
}
operating_system.each { opera ->
if (hostdata_list.contains(opera) == true) { temp_inventory.append("\n" + descap + "operating_system: " + opera) }
}
temp_inventory.append("\n" + descap + "middleware:")
middleware_list.each { midd ->
if (hostdata_list.contains(midd)) { temp_inventory.append("\n" + tescap + "- " + midd) }
}
println ("")
}//end if platform
}
} //end main loop
//build the basicat_inventory.yml inventory file
inventory.append("---")
inventory.append("\nenvironments:")
environments_list.each{
inventory.append("\n"+ descap + "- " + it)
}
inventory.append("\n\ncomponents:")
components_list.each{
inventory.append("\n"+ tescap + "- " + it)
}
inventory.append("\n\nhosts:")
hostname_list.each{
inventory.append("\n"+ tescap + "- " + it)
}
inventory.append(temp_inventory.text)
entry.groovy
def call(Map pipelineParams) {
pipeline {
parameters {
string(name: 'Basicat', description: 'Please enter here the basicat of your application')
}
agent { label 'master'}
stages {
stage('Creating branches'){
steps{
script{
cleanWs()
basicat= Basicat.toLowerCase()
def groupGit = "application"
def createfrom = "g00r00c00"
def inventorybranch = "master" //branch that holds inventory.yml
def urlGit = utility.getUrlGitEin(basicat)
def repoid
def repobranch
def existing_branch = []
def branch_list //in inventory.yml
def ymlcreate = " "
//check if the repository exists
withCredentials([
usernamePassword(credentialsId: 'gitlab-xxxf', usernameVariable: 'UserGitlab',
passwordVariable: 'token')]) {
sh(script: '#!/bin/sh -e\n curl --header "PRIVATE-TOKEN: '\
+token+'" -X GET "https://'+GITLAB_EIN_FQDN+'/api/v4/projects?search='\
+basicat+'" > ifproject.yml 2>/dev/null')
}
ymlfile = readYaml file: 'ifproject.yml'
ymlfile.each{
if(it.path_with_namespace.toString()!=groupGit+'/'+basicat){
println it.path_with_namespace
error( "Repository " + groupGit+"/"+basicat + " doesn't exist" )
}
repoid = it.id
repobranch = it._links.repo_branches
}
//get env from branch master inventory.yml
checkout ( [$class: 'GitSCM',branches: [[name: inventorybranch]],
userRemoteConfigs: [[credentialsId: 'gitlab-xxxf',url: urlGit]]])
ifinventory = fileExists 'inventory.yml'
if(ifinventory){
ymlinventory = readYaml file: 'inventory.yml'
branch_list = ymlinventory.environments
cleanWs()
}else{
cleanWs()
error("file inventory.yml not found on branch " + inventorybranch )
}
//Get existing branch
withCredentials([
usernamePassword(credentialsId: 'gitlab-7xxxf', usernameVariable: 'UserGitlab',
passwordVariable: 'token')]) {
sh(script: '#!/bin/sh -e\n curl --header "PRIVATE-TOKEN: '\
+token+'" -X GET '+ repobranch+ ' > existingbranch.yml 2>/dev/null')
}
ymlbranch = readYaml file: 'existingbranch.yml'
ymlbranch.each{
existing_branch.add(it.name)
}
println "Existing branches: " + existing_branch
if( !existing_branch.contains(createfrom) ){error( "The branch " + createfrom + " doesn't exist." )}
//creating branches
withCredentials([
usernamePassword(credentialsId: 'gitlab-7xxxxf', usernameVariable: 'UserGitlab',
passwordVariable: 'token')]) {
println("\n Start creating environment branches\n Environments list: " + branch_list + "\n")
for(branch in branch_list){
if(!existing_branch.contains(branch)){
sh(script: '#!/bin/sh -e\n curl --header "PRIVATE-TOKEN: '\
+token+'" --request POST "https://'+GITLAB_EIN_FQDN+'/api/v4/projects/'\
+repoid+ '/repository/branches?branch=' + branch + '&ref=' + createfrom + '"> createbranch.yml 2>/dev/null')
ymlcreate = readYaml file: 'createbranch.yml'
if(ymlcreate.name != branch){ error("Failed to create branch " + branch) }else{ println "Branch " + ymlcreate.name + " has been created"}
}else{ println 'branch ' + branch + ' already exists'}
}
} //withCre
} //end script
} //end steps
} //end stage('Creating branches')
} //end stages
} //end pipeline
} //end call
entry_roles.groovy
import com.michelin.cio.hudson.plugins.rolestrategy.*
import hudson.*
import hudson.model.*
import hudson.security.*
import hudson.security.Permission.*
import jenkins.*
import java.util.*
def call() {
println("Starting ...")
def ReleasePermissions = [
"hudson.model.Item.Build",
"hudson.model.Item.Cancel",
"hudson.model.Item.Read",
"hudson.model.Item.Workspace",
"hudson.model.Run.Replay",
"hudson.plugins.promoted_builds.Promotion.Promote"
]
//the same in ampv2 doc
def MoeExecPermissions = ReleasePermissions
def MoeReadPermissions = [
"hudson.model.Item.Read",
"hudson.model.Item.Workspace"
]
def basicat = Basicat.toUpperCase()
def relea_role = basicat
def release_pattern = basicat + "_(?!.*PEXP.*)(?!.*PROD.*).*"
def moeexec_role = basicat + "_MOE"
def moeexec_pattern = basicat + "_INIT_.*"
def moeread_role = basicat + "_MOE_READ"
def moeread_pattern = basicat + "_.*"
def basicatgroup = "GA_AMQ_MOE_" + basicat
def basicatgroup_perm = [
"DELIVERY_NEXUS",
"DOCKER_SUBJOBS",
"BREAKPOINT",
"JOB_PIPELINE"
]
basicatgroup_perm.add(moeexec_role)
basicatgroup_perm.add(moeread_role)
def existing_role = []
def failtoassign = []
def authStrategy = Jenkins.instance.getAuthorizationStrategy()
if(authStrategy instanceof RoleBasedAuthorizationStrategy){
// Get all project type roles
authStrategy.roleMaps.projectRoles.getRoles().each{ rol ->
existing_role.add(rol.getName())
}
//release
if(!existing_role.contains(relea_role)){
utility.CreateRole(authStrategy, relea_role, release_pattern, ReleasePermissions)
} else { println relea_role + " already exists in Jenkins matrix" }
//moeexec
if(!existing_role.contains(moeexec_role)){
utility.CreateRole(authStrategy, moeexec_role, moeexec_pattern, MoeExecPermissions)
} else { println moeexec_role + " already exists in Jenkins matrix" }
//moeread
if(!existing_role.contains(moeread_role)){
utility.CreateRole(authStrategy, moeread_role, moeread_pattern, MoeReadPermissions)
} else { println moeread_role + " already exists in Jenkins matrix" }
//assign roles
println "Assigning roles"
Role assignedrole;
basicatgroup_perm.each{
assignedrole = authStrategy.roleMaps.projectRoles.getRole(it)
if(assignedrole != null){
authStrategy.roleMaps.projectRoles.assignRole(assignedrole, basicatgroup)
println "Role ${it} has been assigned \n"
}else{
println("Cannot assign role ${it}: it doesn't exist")
failtoassign.add(it)
}
}
//save modifications
Jenkins.instance.save()
if(failtoassign.size() > 0){ error("Failed to assign some roles: ${failtoassign}")}
} else {
error("Role Strategy Plugin not found")
}
} //end call
set_users.groovy
import com.michelin.cio.hudson.plugins.rolestrategy.*
import hudson.*
import hudson.model.*
import hudson.security.*
import hudson.security.Permission.*
import jenkins.*
import java.util.*
// To manage RBAC in the admin area and avoid granting access manually
def call() {
println("Starting ...")
def ReleasePermissions = [
"hudson.model.Item.Build",
"hudson.model.Item.Cancel",
"hudson.model.Item.Read",
"hudson.model.Item.Workspace",
"hudson.model.Run.Replay",
"hudson.plugins.promoted_builds.Promotion.Promote"
]
//the same in ampv2 doc
def MoeExecPermissions = ReleasePermissions
def MoeReadPermissions = [
"hudson.model.Item.Read",
"hudson.model.Item.Workspace"
]
def basicat = Basicat.toUpperCase()
def relea_role = basicat
def release_pattern = basicat + "_(?!.*PEXP.*)(?!.*PROD.*).*"
def moeexec_role = basicat + "_MOE"
def moeexec_pattern = basicat + "_INIT_.*|" + basicat + "_Docker"
def moeread_role = basicat + "_MOE_READ"
def moeread_pattern = basicat + "_.*"
def basicatgroup = "GA_AMQ_MOE_" + basicat
def basicatgroup_perm = [
"DELIVERY_NEXUS",
"DOCKER_SUBJOBS"
]
basicatgroup_perm.add(moeexec_role)
basicatgroup_perm.add(moeread_role)
def existing_role = []
def failtoassign = []
def authStrategy = Jenkins.instance.getAuthorizationStrategy()
if(authStrategy instanceof RoleBasedAuthorizationStrategy){
// Get all project type roles
authStrategy.roleMaps.projectRoles.getRoles().each{ rol ->
existing_role.add(rol.getName())
}
//release
if(!existing_role.contains(relea_role)){
CreateRole(authStrategy, relea_role, release_pattern, ReleasePermissions)
} else { println relea_role + " already exists in Jenkins matrix" }
//moeexec
if(!existing_role.contains(moeexec_role)){
CreateRole(authStrategy, moeexec_role, moeexec_pattern, MoeExecPermissions)
} else { println moeexec_role + " already exists in Jenkins matrix" }
//moeread
if(!existing_role.contains(moeread_role)){
CreateRole(authStrategy, moeread_role, moeread_pattern, MoeReadPermissions)
} else { println moeread_role + " already exists in Jenkins matrix" }
//assign roles
println "Assigning roles"
Role assignedrole;
basicatgroup_perm.each{
assignedrole = authStrategy.roleMaps.projectRoles.getRole(it)
if(assignedrole != null){
authStrategy.roleMaps.projectRoles.assignRole(assignedrole, basicatgroup)
println "Role ${it} has been assigned \n"
}else{
println("Cannot assign role ${it}: does not exist")
failtoassign.add(it)
}
}
//save modifications
Jenkins.instance.save()
if(failtoassign.size() > 0){ error("Failed to assign some roles: ${failtoassign}")}
} else {
error("Role Strategy Plugin not found")
}
} //end call
def CreateRole(authStrategy, rolename, rolepattern, rolepermissions){
Role jenkinsrole;
Set <Permission> RolePermissionset = new HashSet<Permission>();
rolepermissions.each { p ->
def permission = Permission.fromId(p)
if (permission != null) {
RolePermissionset.add(permission)
} else {
println("Error with this permission ${p}")
}
}
//println "\nCreating Role object with " + rolename
jenkinsrole = new Role(rolename, rolepattern, RolePermissionset)
println "\nCreating role " + rolename + " in jenkins matrix"
authStrategy.roleMaps.projectRoles.addRole(jenkinsrole)
Jenkins.instance.save()
println "Role " + rolename + " has been created"
}
ini.groovy
def call(){
def basicat = Basicat.toLowerCase()
def urlGit = utility.getUrlGitEin(basicat)
def inventorybranch = "master"
//map definitions
Map<String, List<String>> env_map = new HashMap<String, List<String>>();
Map<String, List<String>> plat_map = new HashMap<String, List<String>>();
Map<String, List<String>> compo_map = new HashMap<String, List<String>>();
Map<String, List<String>> mid_map = new HashMap<String, List<String>>();
Map<String, List<String>> os_map = new HashMap<String, List<String>>();
git([url: urlGit, credentialsId: 'gitlab-xxxf',branch: inventorybranch])
ifinventory = fileExists 'inventory.yml'
if(ifinventory){
ymlinventory = readYaml file: 'inventory.yml'
env_list = ymlinventory.environments
host_list = ymlinventory.hosts
comp_list = ymlinventory.components
env_list.each { env ->
env_map.put(env, [])
} //end env
def serv
host_list.each { host ->
serv = ymlinventory."${host}"
//adding env in map
def envlist=[serv.env]
utility.iniGetter(env_map, envlist, host)
//adding halfplatform in map
platf = serv.platform
utility.iniGetter(plat_map, platf, host)
//adding component in map
component = serv.component
utility.iniGetter(compo_map, component, host)
//adding middleware in map
if(serv.middleware){
utility.iniGetter(mid_map, serv.middleware, host)
}
//adding os in map
if(serv.operating_system){
def oslist = [serv.operating_system]
utility.iniGetter(os_map, oslist, host)
}
} //end host
def all_ct=""
def env_ct = utility.printMap(env_map, "env")
def env_ch = utility.printChild("env", env_list)
all_ct= all_ct + env_ct + env_ch
def half_ct= utility.printMap(plat_map, "halfplatform")
all_ct= all_ct + half_ct
def comp_ct= utility.printMap(compo_map, "component")
def comp_ch = utility.printChild("component", comp_list)
def comp_basicat = utility.printChild(basicat, comp_list)
all_ct= all_ct + comp_ct + comp_ch + comp_basicat
if(mid_map.size() >=1){
def mid_ct= utility.printMap(mid_map, "middleware")
def mid_ch = utility.printChild("middleware", mid_map.keySet())
all_ct= all_ct + mid_ct + mid_ch
}
if(os_map.size() >=1){
def os_ct
//when there is only one OS for all hosts
os_ct= utility.printMap(os_map, "operating_system")
def os_ch = utility.printChild("operating_system", os_map.keySet())
def ifwin= utility.printWin(os_map.keySet())
all_ct= all_ct + os_ct + os_ch + ifwin
}
//print inventory.ini future data
println(all_ct)
//write inventory.ini and push it into master branch
/*
writeFile file: 'inventory.ini', text: all_ct
sh(script: "git add inventory.ini 2>/dev/null", returnStdout: true)
sh(script: "git commit -m 'Creating inventory.ini ${JOB_NAME}:${BUILD_NUMBER}' 2>/dev/null",returnStatus: true)
withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'gitlab-716e11820f',usernameVariable: 'GitLogin', passwordVariable: 'GitPassword']]) {
sh(script: "git config remote.origin.url https://"+GitLogin+":"+GitPassword+"@"+urlGit.split('://')[1], returnStdout: false)
sh(script: "git push --set-upstream origin --all", returnStdout: false)
}
*/
cleanWs()
}else{
cleanWs()
error("file inventory.yml not found on branch " + inventorybranch )
}
}
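For reference, a minimal sketch of the Ansible INI layout that a generator like ini.groovy assembles from inventory.yml. The group and host names here are hypothetical, made up purely for illustration; the real groups come from the environments, components, middleware, and operating systems declared in the YAML inventory.

```shell
# Hypothetical inventory.ini produced by an ini.groovy-style generator;
# group and host names are invented for this sketch.
cat > inventory.ini <<'EOF'
[int]
host1.example.com

[front]
host1.example.com

[component:children]
front
EOF
cat inventory.ini
```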
supervision.groovy
pipeline{
agent any
stages{
stage('Supervision'){
steps{
script{
def url = Url
def match = Matching
def content = " "
try {
content = url.toURL().getText()
println("\nUrl is valid...")
}
catch (Exception e){
println("\nThe url: " + url + " is not accessible\n")
println("\nException output: \n" + e)
}
//continue when the url is accessible
if (content.contains(match)){
println("\nFound the expression: " + match)
}else{
error("\nExpression not found: " + match + "\n")
}
}
}
}
}
}
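The core of supervision.groovy is a fetch-then-match check: download the page content, then verify the expected expression appears in it. A minimal shell sketch of the matching step, run on a local string instead of a fetched URL (the content and pattern below are made up):

```shell
# Sketch of the supervision matching step; "content" stands in for the
# body fetched from the supervised URL, "match" for the Matching parameter.
content="status: OK - all services up"
match="OK"
if echo "$content" | grep -q "$match"
then
    echo "Found the expression: $match"
else
    echo "Expression not found: $match" >&2
    exit 1
fi
```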
test.groovy
def call(Map pipelineParams) {
def Basicat = pipelineParams.Basicat ?: "null"
def colorMap = ['SUCCESS': '7ED529', 'FAILURE': 'FF6666', 'UNSTABLE': 'FFD700', 'ABORTED': 'ACACAC']
def TeamList = "toto@gmail.com"
def Application = ""
def Platform = ""
def stagesStep = ""
def components = ""
def descriptionList = []
def moduleList = []
def versionList = []
//CleanBefore or Breakpoints action for each module
def otheractionList = []
def Branch = ""
def jobSource = ""
def tag = ""
def val_list = []
def rol_list = []
//def Revision = ""
def result = ""
def ifsupervision = ""
def processStatus = {String name, String status, String url, String duration ->
def color = colorMap.get(status)
return "<tr><td>${name}</td><td><font color=#${color}>${status}</font></td><td>${duration}</td><td>${url}</td></tr>"
}
pipeline {
agent { label 'ansible'}
//parameters {
//string(name: '', defaultValue: "", description: "")
//booleanParam(.....
//}
stages {
stage('Checkout') {
steps {
deleteDir()
script{
def job = JOB_NAME.split('_')
Application = job.first().toLowerCase()
if (Application.contains('-')){
Application = Application.split('-').first()
}
Platform = job[1].toLowerCase()
//if the basicat is not passed as a pipeline argument
if(Basicat == "null"){
Basicat = Application
}
if (!OriginBranch) { // Promotion request
println("PROMOTION REQUEST")
jobSource = "Promoted from ${JobName} #${JobNumber}"
//OriginBranch = "${JobName}-${JobNumber}"
tag = "Prom"
//copy artifact parameters.properties
step([$class: 'CopyArtifact',
projectName: JobName,
selector: [$class: 'SpecificBuildSelector', buildNumber: JobNumber]
])
//read parameters
def parafile = readProperties file: 'parameters.properties'
ModulesVersions = parafile['ModulesVersions']
Branch = parafile['Branch']
OriginBranch = parafile['Revision']
}else{ //Init request
println("INIT REQUEST")
jobSource = "Initialize ${JobName} #${JobNumber}"
Branch = "${OriginBranch}"
tag = "Init"
} //end if origin branch
//for email template
descriptionList.add("From: ${JobName}-${JobNumber}")
descriptionList.add("Branch: ${Branch}")
//modules
def slurper = new groovy.json.JsonSlurper()
def modulesJson = slurper.parseText("${ModulesVersions}")
modulesJson.each{
if("${it.Install}" == "true"){
descriptionList.add("${it.Name}: ${it.Version}")
moduleList.add("${it.Name}")
versionList.add("${it.Version}")
components = "<tr><td>${it.Name}</td><td>${it.Version}</td></tr>" + components
if( it.OtherAction.size() > 0 ){
otheractionList.add(it.OtherAction.get(0))
}else{
otheractionList.add("null")
}
}
} //end modulesJson.each
description = descriptionList.join('<br>')
currentBuild.description = "${description}"
manager.addShortText("${tag}", 'black', 'yellow', '0px', 'white')
println(moduleList)
println(versionList)
println(otheractionList)
} //end script
} //end steps
post {
success {
script {
stagesStep = stagesStep + processStatus('Checkout', 'SUCCESS', '---', '---')
}
}
failure {
script {
stagesStep = stagesStep + processStatus('Checkout', 'FAILURE', '---', '---')
}
}
aborted {
script {
stagesStep = stagesStep + processStatus('Checkout', 'ABORTED', '---', '---')
}
}
}
}
stage('PROMOTE-MERGE') {
//agent { label 'ansible' }
steps {
script{
println("Branch: " + Branch + " || OriginBranch or Revision: " + OriginBranch)
println("TargetBranch: " + Platform)
result = build job: 'Merge_Branch_Gitlab', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Action', value: 'mergebranch'), string(name: 'OriginBranch', value: OriginBranch), string(name: 'TargetBranch', value: Platform)], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: PROMOTE-MERGE") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('Promote Merge', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('CHECK IMAGES') {
agent { label 'ansible' }
steps {
sh 'ls'
script{
def check_list = moduleList.join(',')
println("Modules to check: " + check_list)
result = build job: 'Check_Nexus_Image', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Branch', value: Platform), string(name: 'Modules', value: check_list) ], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: CHECK IMAGES") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('Check Images', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('BigIP OUT') {
when {
allOf {
environment name:'NoSwitchBigIP', value: 'false'
}
}
steps {
script{
//def currentport = env.currentport
//def oldport = env.old_port
def oldport = '443'
def module_old = moduleList.join(' ')
result = build job: 'BigIP_PAC', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Modules', value: module_old), string(name: 'Platform', value: Platform), string(name: 'Halplatform', value: HalfPlatform), string(name: 'ServerPort', value: oldport), string(name: 'BigIPAction', value: 'down')], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: BigIP OUT") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('BigIP OUT', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('INSTALLATION') {
steps {
script{
if ( otheractionList.size() > 0 && otheractionList.contains('CleanBefore') ){
println("Action clean up before installation")
def clean_modules = []
def clean_i;
for(clean_i =0; clean_i < otheractionList.size(); clean_i++ ){
if(otheractionList.get(clean_i) == 'CleanBefore'){
clean_modules.add(moduleList.get(clean_i))
}
}
def clean_list = clean_modules.join(',')
println("clean_list: " + clean_list)
result = build job: 'DockerAnsible', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Platform', value: Platform), string(name: 'HalfPlatform', value: HalfPlatform), string(name: 'InstallationStep', value: 'clean'), string(name: 'TemplateType', value: "${AwxTemplate}"), string(name: 'Modules', value: clean_list)], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: INSTALLATION/action: CleanBefore") }
}
println("installation step")
result=''
def install_list = moduleList.join(',')
result = build job: 'DockerAnsible', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Platform', value: Platform), string(name: 'HalfPlatform', value: HalfPlatform), string(name: 'InstallationStep', value: 'install'), string(name: 'TemplateType', value: "${AwxTemplate}"), string(name: 'Modules', value: install_list)], propagate: false
def buildstepvars = result.getNumber()
env['buildstep_number'] = result.getNumber()
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: INSTALLATION") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('Installation', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('CHECK URL SUPERVISION') {
when {
allOf {
environment name:'Supervision', value: 'true'
}
}
environment{
LOG_JOB_NAME = 'Docker_Via_Ansible'
}
steps {
//checkout([$class: 'GitSCM', branches: [[name: '*/master']], browser: [$class: 'GogsGit', repoUrl: 'https://${GOGS_SERVER}'], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: '${GOGS_CMD}/jenkins/${Application}.git']]])
checkout([$class: 'GitSCM', branches: [[name: '*/master']], browser: [$class: 'GogsGit', repoUrl: 'https://${GOGS_SERVER}'], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: Application +'_'+ Platform]], submoduleCfg: [], userRemoteConfigs: [[url: '${GOGS_CMD}/jenkins/' + Application + '_' + Platform +'.git']]])
//git clone
sh 'sleep 5'
withCredentials([usernamePassword(credentialsId: 'api-AMP-ec3f39046b', passwordVariable: 'ampuserpwd', usernameVariable: 'ampuser')]) {
sh '''
set +x
joburl="https://vs300-amp-deploy-prod.dc.uro.equant.com/job/${LOG_JOB_NAME}/${buildstep_number}/consoleText"
curl -X GET --silent --user ${ampuser}:${ampuserpwd} ${joburl} -o log.txt
'''
sh '''
#set +x
if [ -f port.properties ]
then
rm -rf port.properties
fi
#current_port=$(cat log.txt | grep \'"msg":\\s*"\\s*[0-9]*\\s*\\"\' | tail -1 | grep -o "[0-9][0-9]*" | cat )
current_port=$(cat log.txt | grep \'"msg":\\s*"[0-9]*\\,[a-z]*.*"\' | sed \'s/"//g\'| cat )
echo "current_port=$current_port"
if [ ! -z "$current_port" ]
then
echo "current_port=$current_port" > port.properties
cat port.properties
else
echo "No port number found!"
fi
'''
}
script{
def filepath=Application + '_' + Platform +'/' + 'plateforme'+ HalfPlatform + '.properties'
def platf_env = readProperties file: filepath
//def moduleslist = platf_env['ModulesList']
def url = platf_env['UrlSupervision']
def if_file = fileExists 'port.properties'
println('url to check : ' + url)
if (url != null && url != "null"){
println(if_file)
if (if_file){
def port_env = readProperties file: 'port.properties'
def port = port_env['current_port']
url = platf_env['UrlSupervision'] + ':' + port
//set as env variable
env['currentport'] = port
println("Added port number: " + url)
}else{
println("No port found to add")
}
ifsupervision = "true"
//call job
//result = build job: 'Check_URL_Supervision', parameters: [string(name: 'URLSTOCHECK', value: url)], propagate: false
//if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: Check_URL_Supervision") }
} else {
ifsupervision = "false"
println("Not launching job: Check_URL_Supervision")
println("because the url to check is: " + url )
}
}
}
post {
always {
script {
if (ifsupervision == "true"){
println("if sup:" + ifsupervision)
//stagesStep = stagesStep + processStatus('Check Url Supervision', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
}
stage('GET USER DECISION') {
when {
//allOf {
anyOf {
environment name:'NoSwitchBigIP', value: 'false'
environment name:'NoBreakpoints', value: 'false'
}
}
steps {
script {
if ( otheractionList.size() > 0 && otheractionList.contains('Breakpoints') ){
println("Action: Breakpoints")
def break_modules = []
def break_i;
def break_j;
for(break_i =0; break_i < otheractionList.size(); break_i++ ){
if(otheractionList.get(break_i) == 'Breakpoints'){
break_modules.add(moduleList.get(break_i))
}
} //end break_i
def nbre = break_modules.size()
def inc = 1
for(break_j=0; break_j < nbre; break_j++){
inc =inc + break_j
env.DECISION = input message: 'Total: ' + inc +'/'+ nbre + '\nDecision for: '+ break_modules.get(break_j) , parameters: [choice(choices: ['nothing', 'validate', 'rollback'], description: '', name: 'Decision')]
if(env.Decision == 'validate'){
val_list.add(break_modules.get(break_j))
env.VAL_DEC = 'validate'
}else if((env.Decision == 'rollback')){
rol_list.add(break_modules.get(break_j))
env.ROL_DEC = 'rollback'
}else{ println("Nothing to do")}
} //end break_j
} //end if
if(NoSwitchBigIP == false || NoSwitchBigIP == 'false'){
input message: 'Break for BigIP', parameters: [choice(choices: ['continue'], description: '', name: 'CarryOn')]
}
}
}
post {
success {
script {
stagesStep = stagesStep + processStatus('Get User Decision', 'SUCCESS', '---', '---')
}
}
aborted {
script {
stagesStep = stagesStep + processStatus('Get User Decision', 'ABORTED', '---', '---')
}
}
}
}
stage('VALIDATE') {
when {
environment name: 'VAL_DEC', value: 'validate'
}
steps {
script{
println("To validate: " + val_list)
def validate_list = val_list.join(',')
result = build job: 'DockerAnsible', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Platform', value: Platform), string(name: 'HalfPlatform', value: HalfPlatform), string(name: 'InstallationStep', value: 'validate'), string(name: 'TemplateType', value: "${AwxTemplate}"), string(name: 'Modules', value: validate_list)], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: VALIDATE") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('Validate', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('ROLLBACK') {
when {
environment name: 'ROL_DEC', value: 'rollback'
}
steps {
script{
println("To rollback: " + rol_list)
def rollback_list = rol_list.join(',')
result = build job: 'DockerAnsible', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Platform', value: Platform), string(name: 'HalfPlatform', value: HalfPlatform), string(name: 'InstallationStep', value: 'rollback'), string(name: 'TemplateType', value: "${AwxTemplate}"), string(name: 'Modules', value: rollback_list)], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: ROLLBACK") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('Rollback', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
stage('BigIP IN') {
when {
allOf {
environment name:'NoSwitchBigIP', value: 'false'
//environment name: 'USER_DECISION', value: 'rollback'
}
}
steps {
script{
//def currentport = env.currentport
//def oldport = env.old_port
def currentport = '443'
def module_old = moduleList.join(' ')
result = build job: 'BigIP_PAC', parameters: [string(name: 'Basicat', value: Basicat), string(name: 'Modules', value: module_old), string(name: 'Platform', value: Platform), string(name: 'Halplatform', value: HalfPlatform), string(name: 'ServerPort', value: currentport), string(name: 'BigIPAction', value: 'up')], propagate: false
if ( result.getResult() != 'SUCCESS' ) { error("Deployment has failed at step: BigIP IN") }
}
}
post {
always {
script {
stagesStep = stagesStep + processStatus('BigIP IN', result.getResult(), "<a href=${result.getAbsoluteUrl()}>View</a>", result.getDurationString())
result =''
}
}
}
}
}
post {
always {
script {
colorStatus = colorMap.get(currentBuild.currentResult)
rewritetab = ["Application": Application, "Platform": Platform, "buildId" : currentBuild.id, "stagesTable" : stagesStep, "componentTable" : components, "color" : colorStatus, "jobStatus" : currentBuild.currentResult, "jobSource": jobSource]
checkout([$class: 'GitSCM', branches: [[name: '*/master']], browser: [$class: 'GogsGit', repoUrl: 'https://${GOGS_SERVER}'], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: '${GOGS_CMD}/jenkins/jenkinsfiles.git']]])
if(RECIPIENTS_JOB != null && RECIPIENTS_JOB != ''){ RECIPIENTS_JOB= TeamList + "," + RECIPIENTS_JOB }else{RECIPIENTS_JOB = TeamList}
if(MailTag != null && MailTag != ""){MailTag = "[" + MailTag + "]"}
def JobNumber = currentBuild.id
bodytest = readFile "email/endJobs.html"
rewritetab.each{ k, v ->
bodytest = bodytest.replaceAll("\\{"+k+"\\}", v)
}
emailext (to: "${RECIPIENTS_JOB} ${MailRecipient}",
recipientProviders: [developers(), requestor(), brokenBuildSuspects()],
subject: "[AMP]${MailTag} ${JOB_NAME} #${JobNumber} - [${currentBuild.currentResult}]",
body: bodytest,
mimeType: 'text/html');
}
}
}
}
}
test_utility.groovy
import com.michelin.cio.hudson.plugins.rolestrategy.*
import hudson.security.*
import hudson.security.Permission.*
def CreateRole(authStrategy, rolename, rolepattern, rolepermissions){
Role jenkinsrole;
Set <Permission> RolePermissionset = new HashSet<Permission>();
rolepermissions.each { p ->
def permission = Permission.fromId(p)
if (permission != null) {
RolePermissionset.add(permission)
} else {
println("Error with this permission ${p}")
}
}
println "\nCreating Role object with " + rolename
jenkinsrole = new Role(rolename, rolepattern, RolePermissionset)
println "\nCreating role " + rolename + " in jenkins matrix"
authStrategy.roleMaps.projectRoles.addRole(jenkinsrole)
Jenkins.instance.save()
println "Role " + rolename + " has been created"
}
Manage BigIP
#!/bin/bash
TowerUrl="https://${IP_AWX_EIN}"
TowerUser=${TowerCredentials%:*}
TowerUserPass=${TowerCredentials#*:}
cd ${WORKSPACE}
##create the .tower_cli.cfg file
echo "host = $TowerUrl" > .tower_cli.cfg
echo "username = $TowerUser" >> .tower_cli.cfg
echo "password = $TowerUserPass" >> .tower_cli.cfg
echo "verify_ssl = false" >> .tower_cli.cfg
################################################ block 2
##get basicat group id
set +x
basicat_groupid=$(tower-cli group get --name ${Basicat} -f id)
#get basicat component group id
tower-cli group list -a --parent ${Basicat} | sed -e '/id/d' | sed -e '/=/d' | awk '{print $2}' > component.txt
compo_list=$(cat component.txt)
inventory="bigip"
modules="Modules="
bigipmember="BigIPMember="
bigipscript="BigIPScript="
bigippool="BigIPPool="
for compo in ${compo_list[@]}
do
#modules="${module}${compo},"
##get the component's host list in component_env
group_name="${compo}_${Platform}"
if_group_name=$(tower-cli group get -n $group_name | sed -e '/id/d' | sed -e '/=/d' | awk '{print $2}')
if [ -n "$if_group_name" ]
then
tower-cli host list --group $group_name | sed -e '/id/d' | sed -e '/=/d' | awk '{print $2}' > "${compo}_host.txt"
no_host=$(cat ${compo}_host.txt)
if [ "${no_host}" == "records" ]
then
echo "null" > ${compo}_host.txt
fi
##get the pool
compo_pool=$(tower-cli group get -n $group_name -f yaml | grep "BigIPPool" | sed -e 's/"//g' | sed -e "s/'//g" | grep -o "pool.*" | xargs echo)
if [ -z "$compo_pool" ]
then
compo_pool="null"
fi
##get the BigIP appliance (boitier)
boitier_list=$(tower-cli group list -a --parent $inventory | sed -e '/id/d' | sed -e '/=/d' | awk '{print $2}')
compo_boitier=""
for boitier in ${boitier_list[@]}
do
echo
output=$(tower-cli group list -a --parent $boitier | sed -e '/id/d' | sed -e '/=/d' | awk '{print $2}')
result=$(echo $output | grep -o "${group_name}" | xargs echo)
if [ -n "$result" ] && [ "$result" == "${group_name}" ]
then
compo_boitier=$boitier
boitier_script=$(tower-cli group get -n $compo_boitier -f yaml | grep "BigIPScript" | grep -o "F5.*pool.*.bat" | xargs echo)
if [ -z "$boitier_script" ]
then
boitier_script="null"
fi
break
fi
done
##end boitier loop
member_list=$(cat "${compo}_host.txt")
for member in ${member_list[@]}
do
modules="${modules}${compo},"
bigipmember="${bigipmember}${member},"
bigipscript="${bigipscript}${boitier_script},"
bigippool="${bigippool}${compo_pool},"
done
else
echo " "
set -x
echo "there are no group named: $group_name for the component ${compo}"
set +x
fi
done
modules="${modules}null"
bigipmember="${bigipmember}null"
bigipscript="${bigipscript}null"
bigippool="${bigippool}null"
echo $modules > bigip_data.txt
echo $bigipmember >> bigip_data.txt
echo $bigipscript >> bigip_data.txt
echo $bigippool >> bigip_data.txt
set -x
cat bigip_data.txt
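The script above builds each line of bigip_data.txt with the same pattern: start from a `Key=` prefix, append one comma-terminated value per match, then close the list with `null`. A minimal sketch of that accumulation pattern, with made-up component names:

```shell
# List-building pattern from the BigIP script: append each value with a
# trailing comma, then terminate the list with the sentinel "null".
modules="Modules="
for compo in front back
do
    modules="${modules}${compo},"
done
modules="${modules}null"
echo "$modules"
# prints: Modules=front,back,null
```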
Trigger Jenkins job/pipeline - API
#!/bin/sh
#set -x
# Declare a few variables used in the script
#JENKINS_URL="$1"
#JKS_UID="$2"
#JKS_APITOKEN="$3"
#JOB_NAME="$4"
#JSON_CONFIG="$5"
JENKINS_URL="https://jenkins.example.com"
JKS_UID="$1"
JKS_APITOKEN="$2"
JOB_NAME="Compilator"
JSON_CONFIG_PARA="$3"
SOURCEFILE="$4"
if [ -f "$JSON_CONFIG_PARA" ]
then
JSON_CONFIG=$(cat ${JSON_CONFIG_PARA})
else
JSON_CONFIG=$JSON_CONFIG_PARA
fi
## Fixed values (not parameters)
TIMER=20
NOW=$(date +"%d-%m-%Y_%H-%M")
EXIT_CODE=0
PACKAGE_URL_GREP="package_url"
#PAAC_JOB_NAME=$(echo "$JOB_NAME" |sed 's/\([A-Z0-9]*\)_INIT\([A-Z0-9]*\)/\1\2/g')
#echo "${JENKINS_URL}/job/${PAAC_JOB_NAME}"
# Calling jenkins API to get the CRUMB
CRUMB=$(curl --silent --fail -u "${JKS_UID}":"${JKS_APITOKEN}" "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,%22:%22,//crumb)" -k)
CRUMB_STATUS=$?
if [ -z "$CRUMB" ] || [ "$CRUMB" == "" ] || [ "$CRUMB_STATUS" != 0 ]
then
echo "Fails to get connection token"
echo "Check your UID and TOKEN"
exit 1
fi
# Calling jenkins API to get the last build number of the build job
# Increment it by 1 to get the build number of your current build
# This will be used to promote the build in later step
LASTBUILDID=$(curl --silent -u "${JKS_UID}:${JKS_APITOKEN}" "${JENKINS_URL}/job/${JOB_NAME}/lastBuild/api/json?pretty=true" -k | grep -m 1 "id" | sed 's/[^0-9]*//g')
BUILDID=$((LASTBUILDID+1))
# Calling jenkins API to build the job to deploy
curl --fail --silent -X POST "${JENKINS_URL}"/job/"${JOB_NAME}"/build?token="$JOB_NAME" \
-u "$JKS_UID":"$JKS_APITOKEN" \
-H "${CRUMB}" -k \
--form json="$JSON_CONFIG" \
--form file0=@${SOURCEFILE}
sleep 20
SUBJOBID=""
while [ -z "$SUBJOBID" ] || [ "$SUBJOBID" == "none" ]
do
echo "Search build information"
curl --silent -u "${JKS_UID}:${JKS_APITOKEN}" "${JENKINS_URL}/job/${JOB_NAME}/${BUILDID}/api/json?pretty=true" -k > subjob.json
SUBJOBNAME=$(cat subjob.json| grep -m 1 "jobName" | cut -d ':' -f2 | sed 's/"//g' | sed 's/,//g' | tr -d ' ')
SUBJOBID=$(cat subjob.json| grep -m 1 "buildNumber"| sed 's/[^0-9]*//g')
if [ -z "$SUBJOBNAME" ] || [ -z "$SUBJOBID" ]
then
SUBJOBID="none"
else
echo "Information: ${JOB_NAME}#${BUILDID} ||| ${SUBJOBNAME}#${SUBJOBID}"
fi
sleep 10
done
# Calling jenkins API to get the last build number of the build paac job
# Increment it by 1 to get the build number of your current build
# This will help us to get the dynamic logs on the console from jenkins
#LASTJOBID=$(curl --silent -u "${JKS_UID}:${JKS_APITOKEN}" "${JENKINS_URL}/job/${PAAC_JOB_NAME}/lastBuild/api/json?pretty=true" -k | grep -m 1 "buildNumber" | sed 's/[^0-9]*//g')
#JOBID=$((LASTJOBID+1))
PAAC_JOB_NAME=$SUBJOBNAME
JOBID=$SUBJOBID
# Calling Jenkins API to get the console text of the job execution to display the dynamic logs on the pipeline
# Writing the Job Name and the Build ID in Build_Details file for reference
# Exit with error if "FAILURE" is received in the console text
link="${JENKINS_URL}/job/${PAAC_JOB_NAME}/${JOBID}/consoleText"
started="false"
while [ "$started" = "false" ] && [ "$TIMER" -gt 0 ]
do
sleep 1
console=$(curl -u "$JKS_UID":"$JKS_APITOKEN" --silent "$link" -k)
TIMER=$((TIMER-1))
if [ "$(echo $console |grep 'HTTP ERROR 404')" = "" ]
then
started="true"
fi
done
if [ "$started" = "false" ] || [ $TIMER -le 0 ]; then exit 1; fi
jenkins_file="jenkins"
gitlab_file="gitlab"
if [ -f $gitlab_file ]
then
rm $gitlab_file
fi
touch $gitlab_file
finished="false"
delta=""
while [ "$finished" = "false" ]
do
curl -u "$JKS_UID":"$JKS_APITOKEN" --silent "$link" -k > $jenkins_file
delta=$(diff -u $gitlab_file $jenkins_file |tail -n +4 |grep '^+' |sed 's/^+\(.*\)/\1/')
if [ "$delta" != "" ]
then
printf "%s\n" "$delta"
printf "%s\n" "$delta" >> $gitlab_file
fi
if [ "$(echo $delta |grep 'Finished: [A-Z]*')" != "" ]
then
finished="true"
echo "BuildNumber='${BUILDID}'" > Build_Details
echo "BuildJob='${JOB_NAME}'" >> Build_Details
fi
sleep 1
done
if [ "$(echo "$delta" |grep 'Finished: FAILURE')" != '' ]; then EXIT_CODE=1; fi
if [ "$EXIT_CODE" != "1" ]
then
packageurl=$(cat $jenkins_file | grep "${PACKAGE_URL_GREP}" | tail -1 | cut -d ':' -f3 |tr -d ' ')
echo "PackageUrl='https:${packageurl}'" >> Build_Details
fi
exit $EXIT_CODE
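The log-streaming loop above works by diffing the log already printed against the freshly fetched console text, so only new lines are echoed on each pass. A self-contained sketch of that technique, using local files with made-up contents in place of the curl-fetched console log:

```shell
# Incremental-log technique from the trigger script: keep the lines
# already shown in one file, put the latest full log in another, and
# print only the added lines that diff reports.
printf 'line1\n' > seen.log
printf 'line1\nline2\nline3\n' > full.log
delta=$(diff -u seen.log full.log | tail -n +4 | grep '^+' | sed 's/^+\(.*\)/\1/')
printf '%s\n' "$delta"
printf '%s\n' "$delta" >> seen.log
# prints:
# line2
# line3
```

The `tail -n +4` skips the unified-diff header, and the grep/sed pair keeps only lines added since the previous fetch.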