Received for publication 29. 9. 2025
Learning from previous incidents is considered one of the key principles of effective safety management (Kjellén, 2000). The learning process includes several steps, such as information gathering, dissemination, and knowledge exchange (Drupsteen et al., 2013; Littlejohn et al., 2017; Weibull et al., 2020). Crucial elements for information dissemination within the learning process are databases of accidents and near misses. From the accessibility perspective, databases of industrial accidents or near misses can be categorised into three groups: internal company databases, restricted-access databases, and publicly accessible databases.
The article focuses primarily on publicly accessible databases, i.e., databases whose information can be accessed without restriction by any interested party. Examples of such databases include eMARS, ARIA, TUKE, and ZEMA, among others. Public databases can be a powerful tool for ensuring organizational learning not only within businesses but also within all organizations that contribute to ensuring safety, including government agencies, research institutions, companies, and the wider professional community (Le Coze, 2013).
Public databases are used by people with varying knowledge of how a database works, how its information is classified, and what advanced search capabilities it provides. Users naturally draw on their experience with search engines such as Google and apply it when searching industrial accident databases. Users also differ in their preferences regarding webpage design, information sorting, and the like. How information is sorted, how the web portal is designed, and similar factors affect the usability of the database and thus the user's success in obtaining the right result.
Usability is defined in the technical standard ISO 9241-11:2008 as the "extent to which a product can be used by specified users to achieve specific goals with effectiveness, efficiency and satisfaction in a specified context of use". In the field of software engineering, the similar definition given in ISO/IEC 25010:2011 is used: "degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use".
Usability evaluation has several benefits. It helps identify issues and difficulties that users may encounter when using an application. Based on these findings, the user interface can be modified and enhanced to make it more intuitive, easier to operate, and more convenient for users. Good usability enables users to work more efficiently and reduces the time spent on orientation and on learning how to use the application. Users can perform their tasks more quickly and easily, which increases their productivity and effectiveness. Additionally, good usability reduces the risk of errors.
According to available information, the usability of web portals with publicly accessible information on industrial accidents has not yet been assessed and discussed. However, usability can significantly affect the user's success in finding information and thus influence the extent to which lessons are learned from past emergencies. The aim of this paper is to assess the usability of three selected publicly accessible databases. These insights may be utilised to propose entirely new databases or improve the functionality of existing ones.
For the purposes of the study, three publicly accessible databases of industrial accidents were chosen. These are ARIA (ARIA, 2025), eMARS (eMARS, 2025) and MAPIS (MAPIS, 2025). The first two databases were selected due to their relatively high popularity in both the scientific community and among users from other fields of human activity throughout Europe.
ARIA is a database of industrial incidents that has been managed by the French Ministry of Environment, Sustainable Development and Energy since 1992. At present, the database contains more than 63,000 entries (ARIA, 2025). The database is available in French and English.
The eMARS database currently contains approximately 1200 entries. It is a component of a serious accident reporting system, developed on the basis of the EU Seveso Directive 82/501/EEC (eMARS, 2025). The language used in the database is English.
The MAPIS database was selected because it is a national database used in the Czech Republic. It contains a total of 75 records of serious accidents. The database is operated by the Research Institute for Labour and Social Affairs (RILSA, 2025). The language used in the database is Czech.
A range of approaches can be employed for usability evaluation, such as user trials, questionnaires, interviews, heuristic evaluation and cognitive walkthrough (Wronikowska et al., 2021). The authors of the study follow Jeng's (2005) methodology and use a questionnaire survey for usability evaluation. In accordance with ISO 9241-11:2008, three attributes were evaluated: Effectiveness, Efficiency, and Satisfaction. In the initial phase, respondents were queried on their competency level in the English language and their experiences with the databases being tested. For this purpose, the first questionnaire (see Appendix A in supplementary material) was employed.
For the purpose of assessing the first two attributes (effectiveness and efficiency), a questionnaire consisting of 5 questions was developed (see Appendix B in supplementary material). These questions are designed in a way that simulates the expected use of an industrial accident database. That means:
The evaluation was conducted using both quantitative criteria (such as the time required to achieve results, number of mouse clicks, etc.) and subjective assessments from respondents. A Likert scale with five levels was used for this purpose, leaving space for respondents to provide comments/opinions.
In the final stage, respondents' satisfaction with using the databases was evaluated. Satisfaction was examined in terms of ease of use, organization of information, clear labelling, visual appearance, and so on. The questionnaire used to evaluate satisfaction is included in Appendix C and contains a total of 9 questions. Respondents rated their satisfaction using a Likert scale and textual responses. The usability evaluation scheme used in the study is summarised in Table 1.
| Attribute | Evaluation Description |
| --- | --- |
| Effectiveness | Percentage of answers that are correct. |
| Efficiency | Time needed to perform the task correctly; number of mouse button presses required to correctly perform a task; number of mouse wheel movements for a correctly performed task; length of the mouse movement path leading to a correctly performed task; description of the respondent's search strategy leading to the correct performance of the task. |
| Satisfaction | Satisfaction is measured for individual questions using a five-point Likert scale, where 1 means the database is easy to use / high satisfaction and 5 means the database is difficult to use / low satisfaction. |
Tab. 1: Assessment Description of Individual Attributes in the Usability Analysis
A total of 24 individuals were selected for usability analysis, primarily university students. During the initial phase of the experiment, each student responded to questions regarding their experience with databases for industrial accident research as well as questions regarding their English level (see Appendix A questionnaire). In the following step, students completed tasks defined in the Appendix B questionnaire for each database. Tasks were completed sequentially. For the collection of quantifiable data, the software OdoPlus was utilised. Following the completion of all tasks outlined in the questionnaire (see Appendix B), students independently responded to the questions in the questionnaire provided in Appendix C.
TIBCO Statistica 14 software (TIBCO, Palo Alto, CA, USA) was used for the statistical processing of the measured data. All tests were performed at a significance level of α = 0.05. Due to the nature of the collected data, the non-parametric Kruskal–Wallis test with multiple comparisons was used to test the hypotheses that statistically significant differences occur between groups.
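Although the study used TIBCO Statistica, the same per-question comparison can be sketched with SciPy's implementation of the Kruskal–Wallis test. The sample values below are hypothetical task-completion times, not the study's raw data:

```python
# Sketch of the significance testing used in the study: a non-parametric
# Kruskal-Wallis test comparing one metric across the three databases.
# The measurements below are hypothetical, NOT the study's raw data.
from scipy.stats import kruskal

times_mapis = [3.1, 3.6, 2.9, 4.0, 3.5]   # task-completion times (min)
times_emars = [6.2, 7.1, 6.8, 7.4, 6.3]
times_aria = [3.8, 3.4, 4.1, 3.6, 3.9]

# H0: all three samples come from the same distribution.
stat, p = kruskal(times_mapis, times_emars, times_aria)
alpha = 0.05
print(f"H = {stat:.2f}, p = {p:.4f}, significant: {p < alpha}")
```

In the study, the same test would be applied separately to each measured metric (route length, clicks, wheel rotations, time) for each question.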
In the first phase, respondents were assessed on their knowledge of the evaluated databases and their English language proficiency. To accomplish this, a questionnaire was used in which respondents selected the answer that best reflected their self-assessed knowledge level. Letter A denotes the lowest knowledge level, while letter E indicates the highest. Appendix A features a table describing each level in more detail.
Additionally, Figure 1 displays the histograms of responses. The height of each column represents the frequency of responses in the given category. From the histograms, it is evident that the MAPIS database has the highest frequency in category A compared to the other databases. However, this database also has the highest frequency of responses in category E. Regarding the evaluation of English proficiency, level C has the highest frequency; it is described as: "Understanding complete sentences and their meanings. However, unfamiliar words appear frequently in the text, necessitating their lookup in a dictionary."
Fig. 1: A) Frequency of Responses to the Question of the Level of Knowledge of the Evaluated Databases and B) Level of English Knowledge
Effectiveness was measured by the proportion of correct responses out of the total number of responses. A value of 1.0 represents 100% accuracy, meaning all respondents answered the question correctly. Conversely, a value of 0 represents a situation in which none of the respondents answered the question correctly. Table 2 lists the value of these proportions.
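The effectiveness metric described above is a simple proportion of correct answers; a minimal sketch (with an illustrative answer list, not the study's records):

```python
# Effectiveness = proportion of correct answers for one question.
# The answer list below is illustrative, NOT the study's data.
def effectiveness(answers):
    """Proportion of True (correct) answers, in the range 0.0-1.0."""
    if not answers:
        return 0.0
    return sum(answers) / len(answers)

answers_q1 = [True] * 21 + [False] * 3   # 21 of 24 respondents correct
print(round(effectiveness(answers_q1), 2))  # → 0.88
```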
| Question No. | MAPIS | eMARS | ARIA |
| --- | --- | --- | --- |
| 1 | 0.95 | 0.88 | 1.00 |
| 2 | 0.86 | 0.83 | 0.88 |
| 3 | 0.18 | 0.79 | 0.58 |
| 4 | 0.32 | 0.58 | 0.88 |
| 5 | 0.86 | 0.46 | 0.96 |
| Average Value | 0.64 | 0.71 | 0.86 |
Tab. 2: Effectiveness of Responses
A 100% success rate was recorded in only one case, specifically for question 1 when searching in the ARIA database. Conversely, the lowest rate was recorded for the MAPIS database in response to question 3, with only 18 % of respondents answering correctly. The highest average percentage of correct answers across all questions was found when searching using the ARIA database.
Efficiency was measured using four metrics: mouse movement length, mouse button clicks, mouse wheel rotations, and time. Only correct answers were considered. Average values of the measured metrics are presented in Tables 3 to 6. The tables show that, for most questions, the highest (worst) values of the measured metrics were recorded for the eMARS database, with the exception of question 2, where the ARIA database shows the highest values.
| Question No. | MAPIS | eMARS | ARIA |
| --- | --- | --- | --- |
| 1 | 8.19 | 16.72 | 9.94 |
| 2 | 9.08 | 11.99 | 14.24 |
| 3 | 21.04 | 24.63 | 18.61 |
| 4 | 17.85 | 28.34 | 16.02 |
| 5 | 24.38 | 39.99 | 23.45 |
Tab. 3: Measured Average Values of the Length of the Mouse Movement Route (m)
| Question No. | MAPIS | eMARS | ARIA |
| --- | --- | --- | --- |
| 1 | 40.19 | 57.76 | 56.71 |
| 2 | 39.37 | 51.20 | 65.05 |
| 3 | 70.00 | 108.58 | 91.43 |
| 4 | 92.00 | 123.57 | 80.62 |
| 5 | 126.35 | 213.09 | 107.52 |
Tab. 4: Measured Average Values of the Number of Mouse Button Clicks
| Question No. | MAPIS | eMARS | ARIA |
| --- | --- | --- | --- |
| 1 | 53.39 | 210.30 | 54.73 |
| 2 | 71.39 | 109.11 | 182.40 |
| 3 | 118.50 | 252.84 | 217.38 |
| 4 | 115.00 | 324.29 | 178.50 |
| 5 | 274.74 | 705.45 | 302.91 |
Tab. 5: Measured Average Values of the Number of Mouse Wheel Rotations
| Question No. | MAPIS | eMARS | ARIA |
| --- | --- | --- | --- |
| 1 | 3.43 | 6.75 | 3.75 |
| 2 | 2.94 | 4.88 | 6.83 |
| 3 | 9.11 | 9.09 | 8.08 |
| 4 | 7.13 | 9.52 | 6.44 |
| 5 | 8.83 | 13.38 | 9.83 |
Tab. 6: Measured Average Values of the Time Required to Reach a Response (min)
On closer inspection, it is evident that the measured values are highly variable. Testing the hypothesis that statistically significant differences exist between the groups of measured values at a significance level of α = 0.05 yields the following results:
| Metric | Findings |
| --- | --- |
| Length of Mouse Movement Route | There is a statistically significant difference between the ARIA and eMARS databases (p < 0.05). The p-value for the eMARS and MAPIS databases is very close to the significance level. There are no statistically significant differences in the measured values for the other questions. |
| Number of Mouse Button Clicks | For question 5, a statistically significant difference was found between the ARIA and eMARS databases (p < 0.05). No statistically significant differences were observed for the other questions. |
| Number of Mouse Wheel Rotations | For question 1, there were statistically significant differences between the ARIA database and the other databases (p < 0.05), as well as between the MAPIS and eMARS databases. For question 2, there is a statistically significant difference between the ARIA and MAPIS databases (p < 0.05). For question 5, the p-value is close to the significance level when comparing the eMARS and MAPIS databases. There are no statistically significant differences between the measured values for the other questions. |
| Time | There are statistically significant differences between the eMARS database and the other databases for question 1. For question 2, there is a statistically significant difference between the ARIA and MAPIS databases; for the ARIA and eMARS databases, the calculated p-value is close to the significance level (p = 0.07). For question 5, the calculated p-value is close to the significance level for the eMARS and MAPIS databases (p = 0.07). |
The level of satisfaction was assessed based on a questionnaire containing a total of nine questions. All questions provided space for respondents to provide additional comments. Satisfaction was measured using the Likert scale for four of the questions. These questions included:
In the case of the eMARS and ARIA databases, a total of 23 respondents answered; in the case of the MAPIS database, 21 respondents answered. From an attractiveness perspective, the ARIA database was rated best, while the eMARS database was rated worst. Respondents rated navigation on the web portal best for the MAPIS and ARIA databases. The search function was rated best for the ARIA database, and the organization of information was rated best for the MAPIS database.
| Question No. | MAPIS Mode | MAPIS Mode Frequency | eMARS Mode | eMARS Mode Frequency | ARIA Mode | ARIA Mode Frequency |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 9 | 3 | 8 | 1 | 11 |
| 6 | 1 | 8 | 4 | 7 | 2 | 13 |
| 7 | 2 | 6 | 2 and 5 | 6 | 2 | 14 |
| 8 | 1 | 10 | 1 | 7 | 2 | 13 |
Tab. 7: Mode of Answers to Questions Aimed at Satisfaction Evaluation
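The mode and its frequency reported in Table 7 can be computed directly from raw Likert responses with the standard library; the ratings below are illustrative, not the study's data:

```python
from collections import Counter

# Illustrative Likert responses to one satisfaction question
# (1 = high satisfaction, 5 = low satisfaction); NOT the study's data.
ratings = [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 5, 1]

counts = Counter(ratings)
mode_value, mode_freq = counts.most_common(1)[0]

# For multimodal answers (e.g. "2 and 5" in Table 7), collect every
# value that reaches the maximum frequency.
top = max(counts.values())
modes = sorted(v for v, c in counts.items() if c == top)
print(f"mode(s) = {modes}, frequency = {top}")  # → mode(s) = [1], frequency = 6
```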
In the verbal evaluations of the MAPIS database website, the most frequent words used were:
On the other hand, regarding the eMARS database website, the most common words used were:
However, when evaluating the ARIA database's visual attractiveness, the following terms were used:
From the comments given, it can be assumed, as with the quantitative evaluation, that the ARIA portal is rated most highly. Only one subjective comment in this evaluation had a negative connotation.
In the verbal evaluation, respondents mostly rated orientation in the MAPIS portal as clear, with good navigation (10 out of 16 responses). The remaining responses were either neutral or assessed the database as unclear, with one participant rating the section headings as misleading.
Regarding the eMARS database, 9 out of 18 respondents positively evaluated the orientation, using terms such as good, clear, and not complicated. On the other hand, 6 respondents had negative feedback, while the remaining three provided neutral responses. Some respondents appreciated the use of important keywords for identifying disaster-related items within the eMARS database; however, after leaving the web portal (for example, by mistake), it was difficult to return to it.
The ARIA database was evaluated positively in 10 out of 14 responses, with 4 responses remaining neutral. Respondents described the database as intuitive and clear. In some cases, respondents mentioned that the sorting of search results was clear, but the text itself could be improved. One respondent suggested adding a feature that would allow users to directly open PDF files without having to click on a link for the incident.
When assessing the search function of the MAPIS database website, some users criticised the absence of keyword search, poor filter clarity, the limitation of chemical substance searches to a pre-set list, and inadequate search by cause. Regarding eMARS, users criticised the inability to return to previously searched results, an unfriendly user interface, and a filter with relatively few filtering options. One respondent complained that during a new search, they had to reset the results of the previous search or else the search engine would not work properly. The ARIA database received the fewest critical comments, although one concerned the non-intuitive search function.
The final listed question referred to the organization of information about the accident. The information organization in the MAPIS database received predominantly positive comments. One respondent identified "inconsistent naming of events and assigning them to accident locations" and "each function has a component in which only one option can be selected, not a group of options" as negative issues. Another respondent critically evaluated the very concise list of search results in the eMARS database, where they would appreciate short previews of the reports. In the case of the ARIA database, respondents negatively evaluated the unstructured basic text and the lack of visual or graphical separation of search results. On the positive side, they appreciated the text fragments shown under each search result.
Another set of questions focused on evaluating the elements of the database web portals. Specifically, the following questions were asked:
Responses to the question of which database element is rated best can be classified into several categories, visible in the charts in Figure 2. The visual design is rated positively for MAPIS and eMARS, whilst the filter functions are most highly regarded for ARIA. However, respondents were highly critical of eMARS, with 30 % stating that no element was the best. Overall, it appears that respondents did not compare elements between different databases, but rather within a single database.
The worst-rated elements are evident in Figure 3. Regarding the MAPIS database, the "other" category predominated; respondents primarily mentioned the small number of events in the database and the lack of regular updates with new events. One respondent highlighted the inability to open individual events in new browser windows as the worst aspect. A relatively large proportion of respondents did not select any worst element for this database. In the ARIA database, the "other" category was also prevalent; respondents criticised slow loading of information and only partial translation into English.
Fig. 2: Evaluation of the Best Database Elements/Features
Fig. 3: Evaluation of the Worst Database Elements/Features
When asked "What did you miss most when using the website?", respondents evaluating the MAPIS database cited the following issues: the inability to enter multiple filters in a single search, the low number of records in the database, unmarked previously visited links, poor keyword search capabilities, and the absence of advanced search options.
For the eMARS database, respondents cited insufficient filters, overly concise records, inadequately explained search element functions, a low-quality graphical user interface, missing accident location information, and a lack of clear information display.
In contrast, respondents evaluating the ARIA database requested more information about accidents before clicking through to the relevant details, full-text search, structuring of the text in the report detail, faster portal response when loading results, and a clearer filter arrangement.
Respondents were also asked about desirable search functions: "Which search engine features would you appreciate?" For the MAPIS database, they expressed a need for sorting results by chosen attributes (such as the date of an accident), distinguishing between work and industrial accidents, improved guidance, and a reduced number of options within individual search filters. For the eMARS database, respondents frequently cited improvements to filter capabilities, more advanced search functions, enhanced user-friendliness, and the ability to return to previously searched results. Meanwhile, for the ARIA database, respondents mentioned the ability to search using precise dates of incidents, changes to the page's colour scheme, the ability to enter basic logical operators into searches, and structured text.
The number of participants in usability evaluations varies greatly across studies. For example, 41 participants took part in Jeng's (2005) study, 13 in Marzec and Piotrowski's (2023) study, and 110 in Mortezaei and Mohammadnejad's (2022) study. However, some authors argue that a maximum of 5 participants (experts) is sufficient for usability testing, believing that including more respondents is a waste of resources (e.g., Nielsen, 2000).
The authors of the study chose a compromise between the time-consuming processing of study results and the number of results required for statistical analysis; as a result, a sample of 24 respondents was selected. However, because in some cases only correct responses were quantitatively evaluated, the statistical analyses were sometimes based on samples containing only a few observations. In such cases, caution is necessary when interpreting the results.
The time required for the task may have been affected by the fact that some participants were using a different operating system than that provided in the classrooms where the study was conducted. According to statistics for the Czech Republic, the combined market share for alternative operating systems for personal computers is approximately 13 % (MacOS, Linux, ChromeOS, or other) while Windows occupies the remainder (Statcounter, 2023). Therefore, it can be expected that the time to complete the task may have been influenced, particularly for 1-3 participants.
The participants selected to evaluate the usability were from an academic background. They were studying subjects that generally relate to risk prevention, mainly in the industry. The age of the participants ranged from 20 to 25 years old. While older age has a negative impact on information retrieval performance, appropriate strategies learned from user experience can compensate for this disadvantage (Hahnel et al., 2023). From a quantitative evaluation perspective, it is furthermore advantageous to have a group that is as homogeneous as possible (Haustein and Hunecke, 2013).
The English language proficiency levels of the respondents might have influenced both their task completion time and their mouse clicks and scrolling. This hypothesis is supported by the fact that some participants explicitly stated in their written responses that they utilized online translation dictionaries (e.g., Google, DeepL).
From Figure 1, it is evident that the participants' self-assessed knowledge is quite variable. The question of whether the results of the study might have been influenced by the participants' level of database knowledge is certainly relevant. Therefore, it was tested whether the results of participants who rated their knowledge level as high differed from those of participants who rated it as low. For this purpose, the outcome values were divided into two groups: participants who rated their proficiency level as A or B (Group 1) and participants who rated their level as D or E (Group 2). The first criterion assessed was the success rate. The results are shown in Table 8; the values represent the percentage of participants' success in solving each task. It is clear from the results that it cannot be said conclusively that either group is more successful than the other.
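The split into low- and high-knowledge groups and the per-task success rates can be sketched as follows; the records are hypothetical (respondent ID, self-rated level, task solved), not the study's data:

```python
# Sketch of the group split used in the comparison: respondents who
# self-rated their knowledge A or B form Group 1, D or E form Group 2.
# The records below are hypothetical, NOT the study's data.
records = [
    ("r01", "A", True), ("r02", "B", False), ("r03", "B", True),
    ("r04", "D", True), ("r05", "E", True), ("r06", "D", False),
]

group1 = [ok for _, level, ok in records if level in ("A", "B")]
group2 = [ok for _, level, ok in records if level in ("D", "E")]
# Self-ratings of "C" fall into neither group and are excluded.

rate1 = 100 * sum(group1) / len(group1)
rate2 = 100 * sum(group2) / len(group2)
print(f"Group 1: {rate1:.0f} %, Group 2: {rate2:.0f} %")  # → Group 1: 67 %, Group 2: 67 %
```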
| Question | MAPIS Group 1 | MAPIS Group 2 | eMARS Group 1 | eMARS Group 2 | ARIA Group 1 | ARIA Group 2 |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 83 | 100 | 80 | 93 | 100 | 100 |
| 2 | 67 | 100 | 100 | 79 | 100 | 80 |
| 3 | 50 | 0 | 40 | 86 | 40 | 60 |
| 4 | 33 | 33 | 40 | 64 | 100 | 87 |
| 5 | 83 | 100 | 20 | 50 | 100 | 93 |
Tab. 8: Percentage Success Rate (%) of Participants in Solving Individual Tasks
This is confirmed by the results of the analysis carried out in the next step. This was a test of the hypothesis whether there are statistically significant differences between Groups 1 and 2 for some of the quantitative data (Mouse Movement Route, Number of Mouse Button Clicks, Number of the Mouse Wheel Rotation, Time). Tests were performed at the α = 0.05 level of significance. No statistically significant differences were found in any of the task scores.
This is an interesting result. It seems that prior experience with individual databases may not play a significant role in achieving good search results. There are probably other factors, perhaps more significant, that influence success. For example, it may be the search strategies that a person generally chooses when searching on the internet, and which they have adopted in the past.
For the first question in the questionnaire provided in Appendix B, most respondents used a similar search strategy across all databases: entering the name of the chemical substance and the year of the accident. However, some respondents experienced difficulties when entering the specific date of the accident; upon entering it, the website did not retrieve any accidents, which may have affected the time required to complete the task. There is a statistically significant difference in task completion time for the eMARS database, which performs significantly worse than the other evaluated databases. Nevertheless, it should be noted that in the MAPIS database some respondents entered only the year and then searched for the accident by date. Task completion time was relatively low with this strategy; however, the MAPIS database contains relatively few records, and if it contained hundreds or thousands of records, the strategy would not be effective. Additionally, the eMARS database has significantly poorer measurements for the number of mouse wheel rotations and route length. These differences may be due to the way information is organized on the eMARS website, as chosen by its creators.
Similar strategies to those used in Question no. 1 of the questionnaire were also employed in Question 2. Respondents entered a combination of city name and year of the accident into the search engine. The statistical analysis shows that there are statistically significant differences between the ARIA database and other databases in the measured parameters: time and number of mouse wheel rotations. This is likely due to the Toulouse accident record containing a PDF report. Most respondents searched for the cause of the accident in the report, which made this task more time-consuming. According to Son et al. (2023), frequent scrolling of the mouse wheel induces physical fatigue and affects user performance. Additionally, scrolling can evoke negative emotions in users (Šola et al., 2023).
Conversely, the MAPIS database, and alternatively eMARS, include the cause category directly in the record body. This allowed respondents to orient themselves quickly in the text and answer the question in a relatively short time. However, the average values of these parameters are higher for eMARS than for MAPIS, which supports the hypothesis mentioned above that the way information is sorted in eMARS prolongs search time.
Question 3 had the lowest rate of correct answers in the MAPIS database. When a correct answer was recorded, respondents had likely gone through all the accidents contained in the database one by one. This strategy is highly inefficient for a database containing thousands of records. With this type of task, the search function and the available filters fail. One comment from a respondent is telling:
"In my opinion, it was the most challenging task within this database. I couldn't find the appropriate filters and keywords, and locating the overflow-caused accident was difficult for me…"
However, it is relatively interesting that the average values of the measured quantities, such as time, are comparable to the other databases. No statistically significant difference was found for the other measured quantities. Knowledge of technical English terms is necessary when dealing with the eMARS and ARIA databases; some participants relied on online translation dictionaries, which naturally increased the time required for the task.
Question 4 was very similar to question 3. The success rate among participants was again very low when using the MAPIS database, although the measured values were again very similar to those for the eMARS and ARIA databases. Participants who were successful in MAPIS used the pre-set filters; however, most respondents did not notice that these filters were part of the website or were unable to use them effectively. To facilitate successful searches, it would be helpful if the database web portals contained instructions.
For question 5 in the MAPIS database, respondents searched for the accident event "explosion" or gradually sorted accidents according to different keywords; alternatively, they ranked accidents directly according to consequences. Respondents in the eMARS database helped themselves by using the keywords "explosion" and "killed" and searched the records one by one. This strategy is evidently very time-consuming, which is reflected in the average time spent solving the task, significantly higher than in the MAPIS or ARIA databases; the same applies to the other measured variables. In the ARIA database, the task was solved using a filter related to consequences.
Finally, it is worth noting that during the study, the ARIA database's web portal recorded the longest server response times for a given request. Naturally, this has a negative impact on the overall task completion time.
From the usability assessment above, several elements can be identified that could at least facilitate searching, namely:
The mentioned aspects appear to be common among many websites; however, the web portals with databases of industrial accidents did not include these functionalities. The results of the study show that structured text can greatly aid orientation within a record.
The authors recommend that information about the event be presented in a structured form directly on the website, followed by a PDF file containing more detailed information. The information structure of a PDF file may draw inspiration from safety communication principles comprising three layers (Larkin and Larkin, 2007): a simple image, uncomplicated text, and technically detailed information that stands on its own but is available for those who wish to delve deeper into the subject. It would be appropriate, however, for the full-text search to also cover the PDF files that accompany a record. The use of tables should be considered carefully, as tabular data may be ignored when searching (Barakhnin et al., 2023).
The authors believe that using the national language enables users to interpret information more easily and place it into context. Therefore, they suggest operating a national database in a combination of the national language and English, similar to the ARIA database.
According to the authors, it is also important to provide guidance on the functions of each element of the database, such as filters and search functions, among others, to enhance user confidence in the search results. Unfortunately, neither the ARIA nor the eMARS database provides such guidance. The MAPIS database does have a "Help" section, but it contains only general information.
One of the most important principles of safety management is the ability to learn from previous incidents. This process involves a series of steps, from gathering and disseminating information to exchanging experiences. An event database can significantly contribute to the exchange of experience regarding accidents among experts from various industrial sectors. The quality of information access, determined by an appropriate information structure and high-quality website functionalities (filters, full-text search, and graphic design), is a crucial attribute that influences users' ability to comprehend and contextualise information.
The design of the web portals and the functionalities tested in the study reflect the time when the individual portals were created. This, however, poses difficulties for current users, who are accustomed to search strategies from other services that may fail when searching databases of industrial accidents. Users may not even realise that their search strategy has failed: they may take a negative search outcome at face value and conclude that the specified accident simply does not exist in the database (even though the opposite is true). The reason is clear: the search strategies they use (a suitable text string, Boolean operators and full-text search, functional filters, etc.) are considered a standard.
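This silent failure mode can be sketched as follows: an engine without operator support treats the whole query as a literal string and returns nothing, while an engine that actually implements the Boolean AND finds the record. The code is an illustrative assumption about two generic engine behaviours, not the behaviour of any specific database.

```python
def literal_search(records, query):
    """Engine without operator support: the query is matched as one literal substring."""
    return [r for r in records if query.lower() in r.lower()]

def boolean_and_search(records, query):
    """Engine with AND support: every term around 'AND' must occur in the record."""
    terms = [t.strip().lower() for t in query.split(" AND ")]
    return [r for r in records if all(t in r.lower() for t in terms)]

records = ["Explosion of a reactor; one operator was killed.",
           "Leak of ammonia during maintenance."]

query = "explosion AND killed"
print(literal_search(records, query))      # empty: looks as if the accident is absent
print(boolean_and_search(records, query))  # the first record is found
```

The empty result of the first call is exactly the outcome a user may mistake for "this accident is not in the database".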
The study results suggest proposals for enhancing the usability of web-based accident databases, which can be implemented when designing new databases or modifying existing ones. Particularly when creating new databases, it is appropriate to draw on the experience gained with previous databases and to propose a suitable information structure. A suitable information structure will not only enhance search efficiency but also enable or facilitate the analysis of the accumulated data using modern algorithms.
This contribution was created with the support of project TAČR SS05010096, SAFE-BASE: Design of a comprehensive system for the learning process from serious accidents involving a dangerous chemical substance or mixture.
No potential conflict of interest was reported by the author(s).
AL-AYASH, A. ...[et al.]. 2016. The Influence of Color on Student Emotion, Heart Rate, and Performance in Learning Environments. Color Research & Application. 41(2), 196-205. https://doi.org/10.1002/col.21949.
ARIA. 2025. Analyse, Recherche et Information sur les Accidents. Available from: https://www.aria.developpement-durable.gouv.fr/.
BARAKHNIN, V. ...[et al.]. 2023. TableProcessor: The Tool for the Analysis and the Interpretation of Web Tables to Create the Geo Knowledge Base of Kazakhstan. In: Artificial Intelligence in Models, Methods and Applications. Cham: Springer. (Studies in Systems, Decision and Control, vol 457). https://doi.org/10.1007/978-3-031-22938-1_15.
CURCIO, K. ...[et al.]. 2019. Usability in agile software development: a tertiary study. Computer Standards & Interfaces. 64, 61–77. https://doi.org/10.1016/j.csi.2018.12.003.
DRUPSTEEN, L.; J. GROENEWEG & G.I.J.M. ZWETSLOOT. 2013. Critical Steps in Learning From Incidents: Using Learning Potential in the Process From Reporting an Incident to Accident Prevention. International Journal of Occupational Safety and Ergonomics. 19, 63–77. https://doi.org/10.1080/10803548.2013.11076966.
eMARS. 2025. The Major Accident Reporting System. Available from: https://emars.jrc.ec.europa.eu/en/emars/accident/search.
FERNANDEZ, A.; E. INSFRAN & S. ABRAHAO. 2011. Usability evaluation methods for the web: a systematic mapping study. Information and Software Technology, Advances in functional size measurement and effort estimation - Extended best papers. 53, 789–817. https://doi.org/10.1016/j.infsof.2011.02.007.
HAHNEL, C.; U. KROEHNE & F. GOLDHAMMER. 2023. Rule-based process indicators of information processing explain performance differences in PIAAC web search tasks. Large-scale Assessments in Education. 11 (16). https://doi.org/10.1186/s40536-023-00169-5.
HAUSTEIN, S. & M. HUNECKE. 2013. Identifying target groups for environmentally sustainable transport: assessment of different segmentation approaches. Current Opinion in Environmental Sustainability. 5 (2), 197–204.
ISO 9241-11. 2008. Ergonomics of human-system interaction - Part 11: Usability: Definitions and concepts.
ISO/IEC 25010. 2011. Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models.
JENG, J. 2005. Usability Assessment of Academic Digital Libraries: Effectiveness, Efficiency, Satisfaction, and Learnability. Libri. 55 (2-3), 96–121. https://doi.org/10.1515/LIBR.2005.96.
HAN, J. & M. KAMBER. 2012. Data Mining: Concepts and Techniques. 3rd ed. Waltham: Morgan Kaufmann. ISBN 978-0-12-381479-1.
KIRCHSTEIGER, C.; A. RUSHTON & N. KAWKA. 1999. A text retrieval method for the European Commission’s MARS database: selecting human error related accidents. Safety Science. 32, 71–91. https://doi.org/10.1016/S0925-7535(99)00012-0.
KJELLÉN, U. 2000. Prevention of Accidents through Experience Feedback. London: Taylor & Francis.
LARKIN, T.J. & S. LARKIN. 2007. You Know Safety. But Admit It.....You Don’t Know Communication: Fixing Safety Communication in Oil Refineries. Larkin Communication Consulting. Available from: http://www.larkin.biz/data/Fixing_Safety_Communication-English.pdf.
LE COZE, J.C. 2013. What have we learned about learning from accidents? Post-disasters reflections. Safety Science. 51, 441–453. https://doi.org/10.1016/j.ssci.2012.07.007.
LITTLEJOHN, A., ...[et al.]. 2017. Learning from Incidents Questionnaire (LFIQ): the validation of an instrument designed to measure the quality of learning from incidents in organisations. Safety Science: Learning from Incidents. 99, 80–93. https://doi.org/10.1016/j.ssci.2017.02.005.
LUNDBERG, J. ...[et al.]. 2010. What you find is not always what you fix: how other aspects than causes of accidents decide recommendations for remedial actions. Accident Analysis & Prevention. 42 (6), 2132-2139. https://doi.org/10.1016/j.aap.2010.07.003.
MANNERING, F.L. & C.R. BHAT. 2014. Analytic methods in accident research: methodological frontier and future directions. Analytic Methods in Accident Research. 1, 1-22. https://doi.org/10.1016/j.amar.2013.09.001.
MAPIS. 2025. MAPIS: Database of adverse events. Výzkumný institut práce a sociálních věcí. Available from: https://mapis.rilsa.cz/DMU/ClanekDetail.aspx?guidso=a23e3ce2-1159-49a7-a275-4a13843c845d.
MARZEC, P. & D.M. PIOTROWSKI. 2023. Remote usability testing carried out during the COVID-19 pandemic on the example of Primo VE implementation in an Academic Library. The Journal of Academic Librarianship. 49, 102700. https://doi.org/10.1016/j.acalib.2023.102700.
MORTEZAEI, S. & E. MOHAMMADNEJAD. 2022. Usability Evaluation of a Military Medical Center’s Hospital Information System Based on ISO 9241. Journal of Police Medicine. 11, 1–14. https://doi.org/10.30505/11.1.16.
NIELSEN, J. 2000. Why You Only Need to Test with 5 Users. NN/g [online]. Nielsen Norman Group [cit. 2023-06-08]. Available from: https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/.
SON, S. ...[et al.]. 2023. TouchWheel: Enabling Flick-and-Stop Interaction on the Mouse Wheel. International Journal of Human–Computer Interaction. 40 (13), 3539-3551. https://doi.org/10.1080/10447318.2023.2190259.
ŠOLA, H.M.; F.H. QURESHI & S. KHAWAJA. 2023. Eye-tracking Analysis: College Website Visual Impact on Emotional Responses Reflected on Subconscious Preferences. International Journal of Advanced Computer Science and Applications (IJACSA). 14 (1). https://doi.org/10.14569/IJACSA.2023.0140101.
STATCOUNTER. 2023. Desktop Operating System Market Share in Czech Republic. Statcounter [online]. May 2023 [cit. 2023-06-08]. Available from: https://gs.statcounter.com/os-market-share/desktop/czech-republic.
TIBCO. 2020. TIBCO Statistica 14 [online]. Palo Alto: TIBCO Software. Available from: https://www.tibco.com/.
WEIBULL, B.; C. FREDSTROM & M.H. WOOD. 2020. Learning lessons from accidents: key points and conclusions for inspectors of major chemical hazard sites. Luxembourg: Publications Office of the European Union. (Seveso inspection series publication).
WRONIKOWSKA, M.W. ...[et al.]. 2021. Systematic review of applied usability metrics within usability evaluation methods for hospital electronic healthcare record systems. Journal of Evaluation in Clinical Practice. 27, 1403–1416. https://doi.org/10.1111/jep.13582.
Recommended citation
TRÁVNÍČEK, Petr ...[et al.]. Evaluation of the usability of selected databases of industrial accidents. Časopis výzkumu a aplikací v profesionální bezpečnosti [online]. 2025, vol. 18, no. 1-2. Available from: https://www.josra.cz/vydani/clanek/evaluation-of-the-usability-of-selected-databases-of-industrial-accidents. ISSN 1803-3687.