DevOps does not merely mean breaking down the silos between developers and operations. We need to evaluate each “manual operation” in the delivery pipeline when we think of automation. And database changes certainly require an extensive process, and hence they deserve “deep analysis” in any DevOps implementation. And if you want to learn DevOps, you can contact Nearlearn. We provide complete DevOps online training for all DevOps certifications. Nearlearn is also the number one computer training institute and among the top five computer training institutes in India.
A solution depends upon the complexity and “coupled behavior” of the architecture. And it also depends upon the rigidity that affects the change process.
Our objective is to provide a perspective on automating database changes, not merely by explaining why these database changes are difficult, but also through a real-world example that shows how DevOps simplifies the process.
Code optimization and troubleshooting are now easier through integrated errors, code-level performance insights, and logs.
DevOps for Database
Changes in the “database” begin with the developers committing them, usually in the form of a SQL script. The changes are then reviewed by a database administrator (DBA), who looks after the databases. He has the best knowledge of the “database” and can ensure the changes are correct, and not merely in performance, but in data integrity as well.
That sounds like an essential process, doesn’t it? However, the issue is that the DBA generally gets involved only slightly before the deployment to production, and by then it’s too late and too expensive to make the proper changes.
This does not mean that the developers require someone else reviewing what they do. However, this is a typical process in some big companies.
DevOps for databases is “simply” about shifting this process to the left, and through automation you can make the process run more smoothly. However, it’s not just automation. “Automation is merely a part.”
There are situations when we require something very “complex” that is not better performed through automation. But suppose the “database” you use is well defined and you don’t need any re-architecting. That means you don’t need complex changes very often, and automation gives you the implementation of future alterations in a predictable and repeatable way. And it is the same every time.
And to be truthful, automating a table alteration such as adding a new column is easy. The main problem is the state that the “database” needs to “take care of.” In case the “database” has “huge data,” making changes takes a “huge time,” and it blocks other operations on the table, like the CRUD operations.
Automation, “however,” is only one out of many things required in a DevOps implementation. And in fact, it’s the simple part. Hence, always go for automation and avoid frequent manual changes.
No standard among database engines
Database alteration lacks a common standard, as each “engine” comes with different change management, and its impact varies from one engine to another. “SQL Server” indexes do not get affected the way “Oracle” or MySQL indexes do. SQL is about the only thing the database engines have in common. Yet even then, the same statements can give us varied results.
In the coming times, this industry will have an easier time, once there is “standardization” of how we deal with databases. Meanwhile, ensure you have a plan in case you need to change the database engine. Make use of an object-relational mapping framework and various tools to make the job easier. You will find some examples in an upcoming section of this blog.
Tightly-coupled designs
In the majority of instances, the system architecture creates the problem in the “database.” So there are issues other than the database changes themselves in a DevOps implementation. Nowadays, distributed systems are the norm, and we have architectural patterns such as microservices, which solve database coupling by giving each microservice its own “database.”
Through “microservices,” we can decouple the “database.” The only way the microservices interact with the data is through the methods exposed by the service, rather than going directly to the “database,” even if it’s easy and possible to go that way.
When the “database” is used for storage purposes only, it’s easy to make changes. Data you store for analysis is shifted to the “data warehouse,” where there is very little chance the data will change.
The transactional database keeps only the “data” that is currently needed; this is related to the CQRS architecture pattern. In some situations, we keep the data for a week, and sometimes for a month.
No well-established culture and processes
Another essential aspect of “database” changes is the culture and the processes they require.
Leaving the database changes until the end of the workflow hints at “poor” coordination among the teams. It can be that the teams don’t have the same goals in mind. Or there can be an ego problem, when some feel that they shouldn’t require help, and the review process becomes merely a blocker.
You need not wait until the final stage for the DBAs to review the changes. They should be part of the process as early as possible. And with time, the developers, operations, and the DBAs reach agreement on how the database changes need doing. And the more experienced the team becomes with the review process, the more smoothly the process executes.
When there is good collaboration among the teams, the best results can come up, and you need to set one main goal that everyone recognizes.
Technical practices for databases
We have seen how the “database” creates specific issues in DevOps and how the right environment helps. However, there are also “technical practices” that boost a DevOps implementation for changes in the “database.”
Migrations to the rescue
Migrations happen to be scripts covering the database changes, which ideally are idempotent, meaning that it does not matter how many times you run the script, the changes apply only once. It’s good to have the “scripts” in version control so that you keep a record of the changes and can move “easily” back and forward between them.
In other words, migrations are database changes as code. You can execute the same migration in various environments, and it should lead to the same result, starting with the local environment, the developer’s machine.
Production like Environment practice
Let’s discuss a technical practice that is simple to implement but requires a little discipline: testing.
You should test the changes before you apply them in the production environment. If the table data is large, suppose so large that it’s not cheap to replicate the production data in other environments, you can at least simulate the changes with a significant amount of data. That ensures the changes don’t take forever, and you don’t block a table for a lengthy period.
You can make use of containers for this. You will find them “easy” and cheap, and if something goes wrong, you can delete everything and start afresh.
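As a concrete sketch of this idea (the container name and password below are placeholder assumptions, not from the original post), a throwaway SQL Server instance for testing migrations can be started in a container like this:

```shell
# Spin up a disposable SQL Server container for testing database changes.
# The SA password is a placeholder; replace it with your own strong one.
docker run -d --name sql-test \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=Your_strong_Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2019-latest

# If something goes wrong, delete everything and start afresh:
docker rm -f sql-test
```

Because the container holds no state you care about, repeating the test from a clean slate costs almost nothing.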
Database automation tools
To keep talking about databases, we need to talk about some tools. You will find numerous tools out there, and new ones appear frequently. However, some of them are quite popular, and a few of them are mentioned below:
- Liquibase (this is free)
- Datical (this is Liquibase’s paid version)
- Delphix
- Redgate (this is not merely for database changes)
- DBmaestro
And besides tools for specific database engines, there are also frameworks that provide migrations:
- Entity Framework
- Compose
- GORM
- Hibernate
Let’s first look at Entity Framework in .NET Core.
A practical guide to using Entity Framework
You will find numerous powerful tools for automating database alterations. Consider one approach, where you automate through tools such as Jenkins or VSTS using Entity Framework (EF) for .NET Core applications.
If you look at Entity Framework, you will find that updating the database is a repeatable process. Let’s build a project and store the code on GitHub, and let’s focus on the database changes.
We are going to make a small change, and you will see how EF comes into play. You can use any EF code-first project for this. The process goes as below.
Set up the project locally
You will require .NET Core installed, and you will run the application with the IIS Express option. You will also require a SQL Server instance. We need to see how the changes get applied to the database as we move forward.
Begin by changing the input parameters to avoid the database being updated as the application launches; we will do the database updating manually through the EF migration commands. Open the project properties by right-clicking the project name, and then change the debug parameters.
You require a proper configuration for connecting to the “database,” including the database password. You can change the password through the appsettings JSON file.
Now, pick the project name and run it by clicking the “Debug” button. As the application starts, it’s not going to work, as the database does not exist and we have not run the first migration, which creates the database.
Now initialize the database.
Open the terminal; you can make use of the Visual Studio command line. Now, run the command below from the root folder of the project, so that EF creates the database schema.
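The command itself is missing from the post; for an EF Core code-first project, the standard CLI command that applies pending migrations (creating the database on the first run) is most likely:

```shell
# Apply all pending EF Core migrations; creates the database if it doesn't exist
dotnet ef database update
```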
Now you can connect to the database and see that the necessary tables were created.
Alter the application by adding a new column. For that, you need to make the change in Models/stud.cs and append the new column.
Now you need to move to the view and append the column there, which means adding a few lines of HTML code.
After adding the new column, you need to change the code of the corresponding view in “Make.cs.” You have an action in the controller, and for that action there is a “view” where you have to make the changes.
Before running the application again, let’s create the “migration in EF,” so that the next time you run the database update, EF processes any remaining migrations. For doing that, you need to run the command below.
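The command is not shown in the post; with the EF Core CLI, a new migration is scaffolded like this (the migration name “AddNewColumn” is illustrative, not from the original):

```shell
# Scaffold a migration from the model changes; choose any descriptive name
dotnet ef migrations add AddNewColumn
```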
Explore the solution a little, and you will find a new file created with the details of the migration. And always keep in mind that each of the migration files needs to be versioned.
Now run the application once more. The UI gets updated, though it is still not going to work, as the database is not updated yet. You now need to run the update command once more to apply any remaining migrations.
Now refresh the application. It will work fine.
The next time anybody needs to make a change, they create a new migration, and for updating, we use the EF update command again. As this becomes habitual, you will get better at database change automation. Keep in mind, though, that DevOps for the database involves a lot more than the technical practices above.
What about rollbacks?
It’s also possible to roll back changes after an “update” has applied the most recent migrations to the destination database. For doing that, you only need to run the command below:
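The command is missing from the post; judging from the description that follows (it removes the newest migration and must be re-run to keep reverting), it is most likely:

```shell
# Remove the most recent migration from the project
dotnet ef migrations remove
```

Note that if the migration has already been applied to a database, you would first revert it there (for example with `dotnet ef database update <PreviousMigrationName>`) before removing it from the project.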
It removes the most recent migration. That means if you have more than one migration, the above command rolls back only the newest one, and you need to run it again to keep reverting the database alterations.
What if you still need to create scripts?
While you are still adjusting to the new process, you may want to check what EF will do in the database before you commit any of the changes. You can look at the SQL to review the changes, and EF comes with a command for generating the SQL in a format well understood by the DBA.
For generating the migrations in SQL format, you need to run the command below:
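The command itself is missing here; the EF Core CLI command that emits the migrations as SQL is:

```shell
# Generate a SQL script of the migrations, redirected to a file for DBA review
dotnet ef migrations script > migration-review.sql
```

The `--idempotent` flag can also be added to generate a script that is safe to run repeatedly, which matches the idempotency goal discussed earlier.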
Each of the SQL statements appears in the terminal, and you can store this output for future review.
And that’s all. So far, you have practiced locally. You can now automate this new process with the help of Jenkins or VSTS. You run the update command, configured in the deployment pipeline, once the application deploys. And the developers use the other commands to generate the migrations, which they keep in version control.
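A minimal sketch of such a post-deployment pipeline step (the project path is an assumption for illustration, not from the original post):

```shell
# Hypothetical Jenkins/VSTS step that runs after the application deploys:
# move into the project folder and apply any pending migrations
cd src/MyApp          # assumed project location
dotnet ef database update
```

The migrations themselves are generated by developers on their machines and committed alongside the code, so the pipeline only ever applies what version control already contains.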
Shift database changes to the left
As you saw, there is no magic formula for implementing DevOps for databases. You will find a lot of other things to do as well. However, the first step is your willingness to get out of your comfort zone and perform the process better.
If you want to keep things simple, from the processes over to the architecture, you need to focus on a decoupled architecture, which lets you make changes without problems. And keep teaching yourself.
It’s not so difficult to make changes in the database. The real problem lies in the implications of losing or damaging a portion of the data. In automation, the two key things are repetition and consistency, and you need to practice a lot, not merely at deployment to production.
You can make use of the Stackify application performance management tool, Retrace.
You can contact Nearlearn for your DevOps online training. We provide DevOps training in Bangalore, and in fact, you can contact us from any part of the world through our phone or the online form on our site. Just fill it in and submit it, and one of our customer care executives will contact you. And here is what else you get:
- The freedom to choose between DevOps online training and classroom training
- The chance to study with one of the best faculties at one of the best DevOps training institutes in India
- A nominal fee affordable for all
- Complete training covering all the nitty-gritty of DevOps
- Both theoretical and practical training
- And a lot more is waiting for you
You can contact us anytime, from any part of the world, for your DevOps online training. Nearlearn offers some of the best DevOps training in India.