
Friday, July 14, 2017

The importance of having a good log

Due to lack of time, inexperience or simple laziness, we often neglect to give our developments good mechanisms for tracing their activity. This substantially increases the difficulty and the time needed to determine the causes of errors that are not detected at the moment they happen, but later, when the state of the system may have changed substantially.

The time you dedicate to this part of the application is never wasted; quite the opposite. In simple developments it is often unnecessary to complicate our lives with extra work that brings no added value, but the matter changes substantially when the system involves several simultaneous processes and one or more databases. There, finding the cause of an error that may go unnoticed for hours or days, or a bottleneck buried in a tangle of procedures, can be a headache that lasts for days or weeks, sometimes even months or years.

A log is not just a file where errors are recorded. It can hold any information that is relevant for studying how the system has evolved. A distributed system can have a good number of different log files, in addition to tracking information recorded in its databases. If all of this is accompanied by the date and time at which the events occurred, reconstructing what happened in the system at any given moment becomes almost trivial, at least compared with the headaches of trying to find it out without this valuable resource.

You may think that logging hurts system performance, but this is not necessarily true. Writing a line to a text file does not take long, and it can be done asynchronously, since it is not critical for normal system operation. Nor does it cost anything to systematically record the creation date of every database record; it only takes adding a simple field of type DateTime. If this is included in the design from the beginning, the extra workload is minimal compared with the subsequent savings in monitoring and debugging time.

Below I describe some of the practices I follow to make life easier once a system goes into production. I assure you they work wonderfully, and you will be glad you followed this advice. If you already do it, you surely know what I'm talking about.

Activity tracking in the database

You may at some point have had to deal with a database where a multitude of records is created continuously, many of them generated by processes that insert or update data in batches across many different tables. If something goes wrong, or there are performance issues, it is hell to find out when, where and why the problems are occurring.

The solution is easy. Although it is not necessary to do this in every table, you should never be stingy with datetime fields. Always add a field with the creation date of the record, perhaps another with the last modification date (although this is much less useful) and, of course, as many datetime fields as are needed to record the record's changes of state (closure, verification, etc.). With them you can build SQL queries that union all the records you want to analyze, ordered by the date of interest.

This way it is easy to detect, for example, the exact point where a performance problem occurs in a process that integrates a large number of records into several tables, simply by watching the time elapsed between the creation of one record and the next. A suspicious jump serves as a clue to locate the place in the code where that record is created, and there you will find the source of the problem.
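As an illustration, here is a minimal sketch of such a query run from a small C# console program. The table names (dbo.Orders, dbo.OrderLines), the Id and CreatedAt columns and the connection string are made up for the example; the idea is simply to union the records of interest into a single timeline and look at the gap between each creation date and the previous one.

using System;
using System.Data.SqlClient;

class CreationGapReport
{
    // Assumed schema: both tables carry an Id and a CreatedAt datetime column.
    // The subquery unions them into a single timeline; LAG computes the time
    // elapsed since the previous record, and the largest gaps come out on top.
    const string Query = @"
        SELECT TOP 20 Source, Id, CreatedAt,
               DATEDIFF(SECOND, LAG(CreatedAt) OVER (ORDER BY CreatedAt), CreatedAt) AS GapSeconds
        FROM (SELECT 'Orders' AS Source, Id, CreatedAt FROM dbo.Orders
              UNION ALL
              SELECT 'OrderLines', Id, CreatedAt FROM dbo.OrderLines) AS Timeline
        ORDER BY GapSeconds DESC";

    static void Main()
    {
        // Placeholder connection string.
        using (var connection = new SqlConnection(@"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand(Query, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1} created {2}, gap: {3} s",
                        reader["Source"], reader["Id"], reader["CreatedAt"], reader["GapSeconds"]);
                }
            }
        }
    }
}

The rows with the largest GapSeconds values point at the step of the batch process that was running when those records were created.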

It also helps to determine the order in which certain events occurred, which is useful for locating synchronization problems.

Something that is not so easy, and that can also cause performance or storage problems, is keeping, for particularly sensitive information, a history of changes to the records, including deletions, each with its corresponding date and time. In any case, it is not appropriate to store this history alongside the production data; it is always better to keep it in a different database or in schemas that use separate data files.

Application activity log

Although operating systems usually provide event logging services, as Windows does with its event log, I always prefer to develop my own logging mechanisms, because that gives me total control over the log system.

The simplest approach is to use dependency injection to create several components that implement different logging mechanisms. These can even be chained to write the data to different media, such as the system event log, a database or text files in different formats, or even a class that does nothing, which serves to deactivate logging without having to check everywhere whether the log is active or not.

To do this, it is enough to define an interface common to all the classes that implement the logging task; a single method that takes a message and a severity level may suffice. The severity can then be used to decide which events get recorded and which do not.
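As a minimal sketch of this idea (all the names here, ILogWriter, LogSeverity and the concrete writers, are invented for the example): a single-method interface, a text file writer, the "do nothing" writer and a composite writer that chains several media together.

using System;
using System.Collections.Generic;
using System.IO;

public enum LogSeverity { Debug, Info, Warning, Error }

public interface ILogWriter
{
    void Write(LogSeverity severity, string message);
}

// Writes one line per event to a plain text file, with its timestamp.
public class TextFileLogWriter : ILogWriter
{
    private readonly string _path;
    public TextFileLogWriter(string path) { _path = path; }

    public void Write(LogSeverity severity, string message)
    {
        File.AppendAllText(_path, $"{DateTime.Now:yyyy-MM-dd HH:mm:ss} [{severity}] {message}{Environment.NewLine}");
    }
}

// Does nothing: inject it to deactivate logging without checking a flag everywhere.
public class NullLogWriter : ILogWriter
{
    public void Write(LogSeverity severity, string message) { }
}

// Forwards each entry to several writers, so events can be chained
// to different media (text file, database, system event log...).
public class CompositeLogWriter : ILogWriter
{
    private readonly IEnumerable<ILogWriter> _writers;
    public CompositeLogWriter(IEnumerable<ILogWriter> writers) { _writers = writers; }

    public void Write(LogSeverity severity, string message)
    {
        foreach (var writer in _writers) writer.Write(severity, message);
    }
}

The composition root then decides which writer, or chain of writers, to inject: a TextFileLogWriter in production, a NullLogWriter to switch logging off, and so on.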

It is important that the log have a single, centralized entry point, to avoid errors arising from simultaneous access to the same file and the like. In an ASP.NET MVC application, for example, it is easy to create a class derived from ActionFilterAttribute and add it to the filter set in the RegisterGlobalFilters method of the FilterConfig class. Its overridden OnActionExecuted method will then be called after every action executed on any of the controllers.
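A sketch of such a filter, assuming a classic ASP.NET MVC 5 project and reusing the hypothetical ILogWriter and TextFileLogWriter from the previous example; the filter name and the log path are also made up.

using System.Web.Mvc;

// Records every action that is executed, whichever controller it belongs to.
public class ActionLogFilter : ActionFilterAttribute
{
    private readonly ILogWriter _log;
    public ActionLogFilter(ILogWriter log) { _log = log; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var controller = filterContext.ActionDescriptor.ControllerDescriptor.ControllerName;
        var action = filterContext.ActionDescriptor.ActionName;
        _log.Write(LogSeverity.Info, $"Executed {controller}.{action}");
        base.OnActionExecuted(filterContext);
    }
}

// Registered once, in the standard App_Start/FilterConfig.cs:
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new ActionLogFilter(new TextFileLogWriter(@"C:\logs\web.log")));
    }
}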

In a desktop application you can use, for example, a static method on a class that is accessible throughout the application. This method should rely on some configuration mechanism that determines which class will be used to log, so that it can be changed without stopping and restarting the application. In multi-tasking environments it will also usually be necessary to implement the appropriate mechanisms so that only one process or thread accesses the log at a time.
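A sketch of that idea, again with invented names; it assumes two appSettings keys (LogMode and LogPath) in the application's .config file and reuses the writers from the earlier example.

using System;
using System.Configuration;

// Hypothetical application-wide entry point. Requires a reference to System.Configuration.
public static class Log
{
    private static readonly object _sync = new object();
    private static ILogWriter _writer = CreateWriterFromConfig();

    public static void Write(LogSeverity severity, string message)
    {
        // Only one thread at a time touches the underlying writer.
        lock (_sync)
        {
            _writer.Write(severity, message);
        }
    }

    // Re-reads the configuration so the writer can be swapped
    // without stopping and restarting the application.
    public static void Reconfigure()
    {
        lock (_sync)
        {
            ConfigurationManager.RefreshSection("appSettings");
            _writer = CreateWriterFromConfig();
        }
    }

    private static ILogWriter CreateWriterFromConfig()
    {
        // Assumed appSettings keys: LogMode ("file" or "none") and LogPath.
        var mode = ConfigurationManager.AppSettings["LogMode"];
        return mode == "file"
            ? (ILogWriter)new TextFileLogWriter(ConfigurationManager.AppSettings["LogPath"] ?? "app.log")
            : new NullLogWriter();
    }
}

If several processes write to the same file, a plain lock is not enough; a named Mutex, or simply one log file per process, would be needed instead.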

As you can see, it is very easy to implement logging mechanisms. I assure you it is worth it and I highly recommend it.
