
Secure coding practices – the three key principles


Security vulnerabilities are the result of human error: most vulnerabilities in web applications and APIs are introduced by the developers themselves. Therefore, the best approach to building secure applications is to do everything possible to avoid introducing such errors in the first place, rather than fixing them after the fact.

You can find several detailed guides on how to write secure code during application development, for example the one provided by the Open Web Application Security Project (OWASP). They focus on details such as input validation, output encoding, access control, communications security, data protection, cryptographic practices, error handling, and the principle of least privilege. This article, on the other hand, looks at secure coding from a strategic point of view.

Principle 1: Raise awareness and educate

In most cases, developers introduce security risks into source code simply because they are not aware of those risks. While universities often focus on teaching details like formal verification, many of them don’t offer dedicated cybersecurity courses and don’t even mention topics like injection attacks or cross-site scripting (XSS). This is especially true for older developers who took their courses years ago, before application security received the attention it does today.

Universities also teach only a limited number of programming languages, so developers are in most cases self-taught, and some security concerns are very specific to a given programming language. For example, you won’t find the risk of a buffer overflow in Java or C#. And even when a course teaches a language in detail, it rarely covers secure coding practices related to application security in that language.
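As an illustration of the kind of language- and API-specific practice that secure coding training should cover, here is a minimal Python sketch contrasting a query vulnerable to SQL injection with a parameterized one. The table, column names, and values are made up for the example:

```python
import sqlite3

# In-memory database with a sample users table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Safe: the driver treats the input strictly as data, never as SQL code.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -> data leak
print(find_user_safe(payload))    # returns [] -> injection neutralized
```

Both functions look similar, which is exactly why developers who were never shown the difference keep writing the first variant.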

To ensure that your software development teams don’t make mistakes due to lack of awareness, understanding, or gaps in education, you need to approach the problem strategically:

  • Your development managers should not only be aware of security risks but should also be the driving force behind security. A developer without security awareness can be educated, but a development manager who does not realize the importance of security will never become a security leader.
  • Make no assumptions about the knowledge of your developers. Validate it first, and if it is not sufficient, offer internal or external training sessions dedicated strictly to secure coding standards. Making security knowledge a hard requirement for new hires is rarely a good idea: it severely limits your recruiting pool, and developers can learn as they progress.
  • Be aware that no matter how well your developers understand security, new techniques and attacks appear very often due to the speed at which technology advances. Some of these techniques require very specific security knowledge that can only be expected from someone in a full-time, security-focused position. Expect your developers to make mistakes and don’t punish them for it.
  • Don’t separate your development teams from your security teams. The two should work very closely together. Developers can learn a lot from security professionals.
  • Don’t assume that the nature of your software reduces your security requirements in any way. For example, even if your web application is accessible only to authenticated clients rather than the general public, it should be just as secure as a public application. In general, don’t look for excuses.

Principle 2: Introduce several levels of verification

Even the most knowledgeable and educated developers still make mistakes, so trusting them to write secure code isn’t enough. You need automatic auditing tools that work in real time during development to help them realize their mistakes and follow up with appropriate mitigation.

In an ideal situation, software should be tested using the following tools and methods:

  1. A code analysis tool integrated into the development environment. Such a tool catches basic errors as soon as the developer types the code.
  2. A SAST (static application security testing) solution that works as part of the CI/CD pipeline. Such a solution analyzes the source code before the build and reports potential software vulnerabilities. Unfortunately, SAST has significant drawbacks, including a high rate of false positives.
  3. An SCA (software composition analysis) solution that works as part of the CI/CD pipeline. Since most code these days doesn’t come directly from your developers but from the open-source libraries they use, you need to help them make sure they are using secure versions of those libraries. Otherwise, you’ll have ticking time bombs waiting to go off in your codebase.
  4. A DAST (dynamic application security testing) solution that works as part of the CI/CD pipeline. Such a solution scans the application at runtime (after the build, without access to the source code) and reports real security vulnerabilities. For this kind of software, performance is very important (the scans are resource-intensive), as is the certainty that reported issues are real (proof of exploit).
  5. Supplementary manual penetration testing for errors that cannot be detected automatically, for example, business logic errors. However, this requires specialized security personnel and is time-consuming, so it is often done only in the later stages of the software development lifecycle (SDLC).
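To make the SCA idea above concrete, here is a minimal Python sketch of the kind of check such a tool performs. The package names, versions, and advisory data are entirely made up; real SCA tools pull this information from vulnerability databases automatically:

```python
# Hypothetical advisory data: package -> versions with known vulnerabilities.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.2", "1.0.3"},
    "otherlib": {"2.1.0"},
}

def audit_dependencies(dependencies):
    """Return the (package, version) pairs that match a known advisory."""
    return [
        (pkg, ver)
        for pkg, ver in dependencies.items()
        if ver in KNOWN_VULNERABLE.get(pkg, set())
    ]

# Dependency versions as a build system might report them (illustrative).
deps = {"examplelib": "1.0.2", "otherlib": "2.2.0"}
findings = audit_dependencies(deps)
for pkg, ver in findings:
    print(f"FAIL: {pkg} {ver} has a known vulnerability")
# In a real pipeline this step would exit non-zero when findings is
# non-empty, causing the CI/CD stage to fail and block the build.
```

The value of running such a check in the pipeline is that an insecure library version is flagged on every build, not just during an occasional manual audit.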

However, such extensive security testing takes a lot of time and resources. Therefore, a trade-off is often necessary between the time and effort required to perform the tests and the quality of the results. If such a compromise is required, a fast DAST scanner that provides proof of exploit and includes SCA functionality is the best choice.

Principle 3: Test as early as possible to promote accountability

To achieve optimal code quality, it is not enough to have secure coding requirements, secure coding guidelines, and a testing infrastructure in place. Teams should not follow secure coding principles merely because they feel obligated to, or because their code will be tested; they should also feel that writing secure code is in their own best interest. Secure coding doesn’t just need rules and enforcement; it needs the right attitude.

A shift-left approach like the one described above has many advantages. One of them is that developers come to see themselves as an integral part of the security landscape: they take responsibility for the security of their code and realize that if they make a mistake, they will have to fix it immediately rather than relying on someone else to do it later.

Of course, you can test your application for security vulnerabilities just before it goes into production, or even in production (shift right). However, this will cost you much more than shifting left. The software will have to go through all the stages again, which involves resources beyond just the developers. The original developer may no longer remember the code they worked on, or the fix may be assigned to a different developer, who will need more time to find and remove the vulnerability. As a result, late testing can delay a release by several weeks.

Not just security policies

In conclusion, we would like you to realize that security policies, while necessary, are not enough if they are seen as a limitation rather than an improvement. Security starts with the right attitude when building applications. And even the best security tools must be integrated correctly into the process, so that they are seen as helpful rather than as a burden.


Tomasz Andrzej Nidecki
Technical content writer

Tomasz Andrzej Nidecki (also known as tonid) is a technical content writer working for Acunetix. Journalist, translator and technical writer with 25 years of IT experience, Tomasz was editor-in-chief of hakin9 IT Security magazine in its early days and used to run a large technical blog dedicated to email security.