You are probably wondering why we keep hearing that legacy flaws are still present in new software. Well, the answer is simple. Developers have a habit of reusing old code in most of their projects, and that code is rarely reviewed for all potential flaws. Instead, the approach tends to follow the old adage: 'if it ain't broke, don't fix it'.
This does not mean that developers are lazy. The approach is favoured even by top-notch programmers because of the tight deadlines they have to meet, so time will always come before everything else when shipping new software.
However, this comes at a hefty price. While we hear of many hacking incidents, only a few of them are complex enough to break into even the most impenetrable systems. Most of them succeed by exploiting flaws already 'implanted' in software products through reused code. As a result, almost everything short of the operating system itself can be deemed 'hackable' by anyone with some knowledge of hacking.
The flaws go so deep that even some government departments are at high risk. Security analysts found that some software in government departments is still based on older programming languages. But is this the future of programming? Of course not.
Security analysts in the field say that problems caused by legacy flaws are likely to increase, but they don't have to. The real problem is that, by focusing exclusively on pushing new software to market, companies forget about security completely. A better approach is to split project development into two major components, development and testing, which can run in parallel. This way, many bugs could be fixed and major security flaws flagged before the software hits the market.
Thanks to CNET for providing us with this information.
Image courtesy of nikopik