The WHO recognises the importance of school meals and recommends school meal policies as a way to improve public health(1). While many countries offer school meals, contexts and policies vary greatly(Reference Bonsmann2–Reference Cohen, Hecht and McLoughlin6). However, one common issue is a lack of monitoring and evaluation(Reference Kovacs, Messing and Sandu5,Reference Nelson and Breda7,Reference Lucas, Patterson and Sacks8), particularly in high-income countries(3). This limits the evidence base for school meal policies, which in turn can hamper the spread of good or improved practice.
Universal policies and long-standing practices are particularly challenging to evaluate(Reference Kovacs, Messing and Sandu5), and this is exemplified by Sweden, where universal school meals have a long history (see Box 1). Sweden is almost unique in providing school lunches free of charge to all primary school pupils – ages 6–16 – regardless of the economic circumstances of the family, or whether the school is publicly (i.e. municipality) or privately run. Yet according to a recent overview of policies in selected European countries(Reference Kovacs, Messing and Sandu5), neither Sweden nor Finland, which has a very similar school meal system(Reference Pellikka, Manninen and Taivalmaa9), would have met that review’s core criteria for ‘good practice’, as both countries lack an official system for monitoring and evaluation. In Sweden, new legislation on education came into effect in 2011, explicitly stating for the first time that school meals should be ‘nutritious’ (see Box 1). This provided a new opportunity for a policy evaluation, but none was officially planned. No further clarification of ‘nutritious’ was provided, but the School Inspection Agency, which has the task of following up implementation of all aspects of the school law, interpreted it to mean that meals should be in line with Swedish (now Nordic) nutritional recommendations, and that schools should include meals in their systematic routines for quality control(10).
Late 19th century: School lunches are provided piecemeal, as a way of counteracting poverty
1946: National policy is introduced to subsidise meals if local authorities choose to provide them
1970s: Implementation of school lunches is now widespread
1998: Education Act 1997 comes into force; school meals to be provided to all, ‘free of charge’
2011: Education Act 2010 comes into force and adds that school meals should be ‘nutritious’
For a more detailed history, see Lundborg et al. (Reference Lundborg, Rooth and Alex-Petersen29).
As no official monitoring or evaluation was planned, in 2010 a tool was developed by researchers to allow schools to complete a self-assessment (audit) of their school meal situation. One aim of the tool was to build up a database and to evaluate any changes in nutritional quality after the new policy. Another aim was to support schools and municipalities in their attempts to improve overall school meal quality, by providing them with automatic tailored feedback(Reference Patterson, Quetel and Lilja11). Audit and feedback is an implementation strategy defined as ‘a summary of […] performance over a specified period of time, given […] in a written, electronic or verbal format. The summary may include recommendations for […] action’(12). Self-evaluation is considered a useful tool in the field of school effectiveness(Reference Chapman and Sammons13), and a Cochrane review found evidence for the effectiveness of audit and feedback for improving practice in the healthcare setting(Reference Ivers, Jamtvedt and Flottorp14). However, another Cochrane review of strategies used to enhance implementation of school-based policies or practices – targeting risk factors such as nutrition, physical activity or tobacco – found that audit and feedback was rarely utilised in this setting(Reference Wolfenden, Nathan and Sutherland15).
Using a pre-post study design, the initial effects of the policy were evaluated between spring 2011 and spring 2013 in a randomly selected, nationally representative sample of schools. That study found significant but modest improvements in nutritional quality, defined as schools meeting nutritional criteria(Reference Patterson and Elinder16). In the present study, using the same outcome as the previous study, we wanted to examine the effects of the policy over a longer time period, as well as, for the first time, evaluate the effect of repeatedly using the tool. The aims of the current study were therefore to describe the changes in nutritional quality of school lunches offered in Swedish primary schools in the eight years following a change to national legislation (research question [RQ] 1) and to examine if repeated use of a self-audit tool was associated with improvements (RQ2).
Methods
The School Food Sweden tool
The development and validation of the tool is described in more detail elsewhere(Reference Patterson, Quetel and Lilja11). Briefly, a stakeholder group helped identify six important domains of school meal quality – provision/choice, nutritional quality, food safety issues, service and pedagogy, environmental and organisational aspects – and a web-based tool to measure each of these was developed. Following pilot-testing, validation studies and further improvements such as automatic feedback reports, the tool was made freely available for all primary schools in March 2012. The tool consists of two parts: questionnaires – one per domain – plus feedback in the form of a tailored report. The questionnaires are free-standing and can be answered in any order; to generate a report, only the domain ‘provision/choice’ must be completed, while the others are optional. The report is a pedagogically designed PDF document, about twenty pages long. It includes summary statistics and clear explanations of why each domain and sub-domain is important; all domains are included, even if not yet completed. The score for each question is shown using a traffic-light colour system to indicate which results are currently good and which could be improved. Schools can contact the administrator of the tool by email or phone if they have questions, but no support, follow-up or other feedback is offered as standard. Schools can use any part of the tool as often as they wish, without limitations. Any member of staff can complete any questionnaire, but more often than not the nutritional quality domain is completed by the school kitchen manager. When the school is ready, they click a button to create and download their tailored feedback.
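The traffic-light presentation in the feedback report can be illustrated with a minimal sketch. The thresholds and function name below are hypothetical; the actual scoring rules are described in the validation paper(Reference Patterson, Quetel and Lilja11).

```python
def traffic_light(score: float, green_from: float = 0.8, amber_from: float = 0.5) -> str:
    """Map a question's score (0-1) to a traffic-light colour for the feedback report."""
    if score >= green_from:
        return "green"   # currently good
    if score >= amber_from:
        return "amber"   # acceptable, but room for improvement
    return "red"         # should be improved

print(traffic_light(0.9), traffic_light(0.6), traffic_light(0.2))  # green amber red
```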
Setting, recruitment and study design
Guidelines for school meals are produced by the Swedish Food Agency, a government agency. Guidelines were first issued in 2007; a major revision was published in 2013(17). The guidelines state that meals are expected to meet nutritional recommendations over a four-week period, and include general information about foods to promote, but no standards or rules. In fact, in the most recent revision, in 2018(18), suggestions of food servings were toned down even further, in order to emphasise the importance of schools themselves having the appropriate knowledge and competence to take a common-sense and holistic approach. This non-prescriptive approach is possible in part because of the long tradition of school meals – even today a school lunch consists of a cooked meal, a salad buffet and crispbread with spread, and milk/water. Deep-fried foods have never been a feature, nor have desserts or soft drinks. In the majority of schools, a choice of two or more warm meals is offered, and these days one of those options is very often vegetarian. Food is prepared freshly, either on-site (by municipal or private catering), at a nearby school, or at a central municipal kitchen. School cafeterias are common but, while their offering is generally less healthy, it is not free of charge; vending machines are very rare. Pupils are not permitted to bring food from home, but teenagers may generally leave the school premises. Dietary requirements on medical, disability or religious grounds must be accommodated, while dietary preferences on ethical or other grounds may be accommodated if deemed practical. Pupils generally serve themselves and eat in a canteen; teachers are usually present and are encouraged to use the meal as an opportunity to interact with pupils, the ‘pedagogical lunch’(Reference Persson Osowski, Goranzon and Fjellstrom19). The guidelines also emphasise the need to consider other aspects of meal quality, such as the importance of a pleasant meal environment, allowing adequate time to eat, and how school meals have the potential to be integrated with other educational activities(18).
The study population was all primary schools that used the tool between the launch date, 29 March 2012, and 31 July 2019. Schools self-select to use the tool, although some public schools may be directed to do so by their municipality. Schools are not invited, and any contact with them prior to sign-up is usually indirect – e.g. they have seen the tool mentioned in guidelines; the project manager for the tool has had contact with a municipality or region, or with relevant organisations and government agencies; or the tool has been mentioned at relevant meetings/conferences, etc. A municipality-level account function that can provide an overview of school account activity and create municipality-level reports was added in 2016.
To examine changes over time (RQ1), we used a repeated cross-sectional design. If a school performed more than one audit of nutritional quality during a school year (defined here as 1 August–31 July), only the most recent was included when reporting that school year’s results at group level. To compare the results of repeated audits (RQ2), we used a longitudinal open cohort design. Due to pilot testing and the pre-post study, some schools had used the tool before the launch date, when automatic feedback was not yet in place. We restricted the analysis for RQ2 to schools that had only ever been exposed to the complete version of the tool, i.e. schools that had first completed an audit of nutritional quality after the launch date.
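The selection rule for the repeated cross-sectional data set can be illustrated with a short sketch. This is not the authors’ analysis code (which used SPSS and Stata); it is a minimal pandas illustration, with hypothetical column names, of keeping only the most recent audit per school and school year (1 August–31 July).

```python
import pandas as pd

# Hypothetical audit records for two schools.
audits = pd.DataFrame({
    "school_id": [1, 1, 2],
    "audit_date": pd.to_datetime(["2013-09-15", "2014-03-02", "2014-05-20"]),
})

# Shifting dates back seven months maps every date in a school year
# (1 August-31 July) onto the calendar year in which that school year starts.
audits["school_year_start"] = (audits["audit_date"] - pd.DateOffset(months=7)).dt.year

# For RQ1, keep only the latest audit per school within each school year.
latest = (audits.sort_values("audit_date")
                .groupby(["school_id", "school_year_start"])
                .tail(1))
print(latest)
```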
Data collection
Nutritional quality
The nutritional quality questionnaire assesses the adequacy of a school’s four-week lunch menu in terms of four nutritional aspects: iron, fibre, vitamin D and fat quality. These four were chosen to focus on nutrients that are important for children and whose requirements are not easily met(Reference Barbieri, Pearson and Becker20), including in school lunches(Reference Persson Osowski, Lindroos and Enghardt Barbieri21,Reference Colombo, Patterson and Elinder22), while keeping the questionnaire as brief as possible. The questionnaire includes questions about the serving frequency of rich and/or common food sources of these nutrients over a four-week period. All data are self-reported by schools. The answers are scored and compared with validated criteria for the four nutrients(Reference Patterson, Quetel and Lilja11). If the criteria for all four are met, the school menu is classified as ‘likely to meet nutritional recommendations’, in the current study referred to as ‘meeting nutritional criteria’, the primary outcome. All other results are combined as ‘not meeting nutritional criteria’. Where two audits had been conducted very close together (within 28 d), the later was excluded, on the assumption that this was unlikely to reflect meaningful changes and could signal that the school was testing the effect of alternative answers.
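As a concrete illustration of the outcome definition and the 28-day exclusion rule, the sketch below shows the logic in plain Python. The field names are hypothetical, and the actual scoring against the validated serving-frequency criteria(Reference Patterson, Quetel and Lilja11) is not reproduced here.

```python
from datetime import date

def meets_nutritional_criteria(audit: dict) -> bool:
    """All four criteria must be met for the menu to count as the primary outcome."""
    return all(audit[k] for k in ("iron_ok", "fibre_ok", "vitamin_d_ok", "fat_quality_ok"))

def drop_near_duplicates(audits: list[dict], min_gap_days: int = 28) -> list[dict]:
    """Exclude an audit performed within 28 days of the school's previous audit."""
    kept = []
    for a in sorted(audits, key=lambda x: x["date"]):
        if kept and (a["date"] - kept[-1]["date"]).days < min_gap_days:
            continue  # assumed to reflect re-testing of answers, not a real menu change
        kept.append(a)
    return kept

# Example: the second audit falls within 28 days of the first and is excluded.
audits = [
    {"date": date(2015, 9, 1), "iron_ok": True, "fibre_ok": True,
     "vitamin_d_ok": False, "fat_quality_ok": True},
    {"date": date(2015, 9, 10), "iron_ok": True, "fibre_ok": True,
     "vitamin_d_ok": True, "fat_quality_ok": True},
]
print([meets_nutritional_criteria(a) for a in drop_near_duplicates(audits)])  # [False]
```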
Active use of the tool
We extracted data on when and how often the school had performed the audit(s) of nutritional quality, as well as the number of days that had elapsed since any previous audit, and whether feedback (a report with results) had been generated. Some reports are never generated, due to lack of awareness, lack of interest or perhaps technical difficulties, and we cannot see whether reports have been opened or read. We calculated the proportion of times a school had generated reports and categorised this as sometimes (0–50 % of occasions), mostly (51–89 % of occasions) or almost always (90–100 % of occasions). For public schools, we also noted if and when the municipality had created an account. This variable was included as a proxy for how interested the municipality was in the tool, although this could either signal that schools had support when using it or, conversely, that they were merely under external pressure.
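The banding of feedback generation could look like the following sketch; the function name and the exact handling of fractional percentages are hypothetical, as only the three bands are defined above.

```python
def feedback_category(reports_generated: int, audits_performed: int) -> str:
    """Band the share of audits for which a feedback report was generated."""
    share = 100 * reports_generated / audits_performed
    if share <= 50:
        return "sometimes"       # 0-50 % of occasions
    if share < 90:
        return "mostly"          # 51-89 % of occasions
    return "almost always"       # 90-100 % of occasions

print(feedback_category(2, 4), feedback_category(3, 4), feedback_category(9, 9))
# sometimes mostly almost always
```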
School characteristics
Data on schools were extracted from a national database(23), namely: the number of pupils, the owner of the school (municipality or private) and the location of the school. As measures of the school’s socio-economic position, we used the proportion of students with parents with higher education (>12 years of education), as well as the proportion with a foreign background (pupil or both parents born outside Sweden). Occasionally, data were missing or, where there were fewer than 10 pupils in a category, not published. In the latter case, we imputed the value as five. School size was categorised into three categories (≤200 pupils, 201–400 pupils and >400 pupils). Geographical location in Sweden was coded as east, south or north, according to one of the definitions used by Statistics Sweden.
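A minimal pandas sketch of this covariate handling is shown below, with hypothetical column names: suppressed counts (fewer than 10 pupils) are imputed as 5 before proportions are calculated, and school size is banded into the three categories.

```python
import numpy as np
import pandas as pd

schools = pd.DataFrame({
    "n_pupils": [150, 320, 650],
    "n_higher_edu": [80, np.nan, 400],   # NaN stands in here for a suppressed count (<10)
})

# Counts suppressed for disclosure control were imputed as 5;
# genuinely missing values were left missing (not shown in this toy example).
schools["n_higher_edu"] = schools["n_higher_edu"].fillna(5)
schools["prop_higher_edu"] = schools["n_higher_edu"] / schools["n_pupils"]

# School size in three bands.
schools["size_band"] = pd.cut(schools["n_pupils"],
                              bins=[0, 200, 400, np.inf],
                              labels=["<=200", "201-400", ">400"])
print(schools)
```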
Statistical analysis
For the cross-sectional study (RQ1), the proportion of schools meeting the criteria for nutritional quality each school year was compared, and a binary logistic regression was performed to see if school year was a significant predictor. For RQ2, to investigate whether schools with more audits were more likely to meet the nutritional criteria than those with fewer, several analyses were performed. First, we grouped audits from all schools by audit order (i.e. all first audits, all second audits, etc.) and compared the proportion of schools meeting the criteria across all groups, calculating the average results and the average change from the preceding audit. Second, as selection bias was a potential concern, i.e. schools that went on to use the tool many times might differ from those that only used it once, we repeated this analysis stratifying schools according to the total number of audits performed, to see if the pattern held. Schools with more than nine audits were excluded due to very small numbers (n 13, 1 %).
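The RQ1 analysis – a binary logistic regression of the outcome on school year – can be sketched as follows. This is a minimal statsmodels illustration on simulated data with hypothetical variable names, not the study’s actual SPSS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the repeated cross-sectional data set:
# one row per school and school year; 1 = meets all four nutritional criteria.
rng = np.random.default_rng(1)
years = np.repeat(np.arange(2012, 2019), 200)       # year in which the school year starts
p = 0.10 + 0.035 * (years - 2012)                   # rising share meeting the criteria
df = pd.DataFrame({"school_year": years.astype(str),
                   "meets_criteria": rng.binomial(1, p)})

# Binary logistic regression with school year as a categorical predictor
# and 2012 (i.e. school year 2012/13) as the reference level.
model = smf.logit("meets_criteria ~ C(school_year, Treatment(reference='2012'))",
                  data=df).fit(disp=0)
print(np.exp(model.params))   # exponentiated coefficients (OR per school year v. 2012/13)
```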
Third, we performed a subgroup analysis that allowed us to control for potential confounders, using variance weighted least squares (VWLS) regression. This model, sometimes referred to as meta-regression, extends simple linear regression by treating the outcome as an estimated quantity that can be averaged, rather than as a single observation. For each subgroup (audits grouped by audit order), the variance of the outcome variable is estimated and assumed independent of the other subgroups. The model then treats each subgroup as one observation, weighted by the inverse of its estimated variance. In general, the outcome variable can be seen as an estimate and the explanatory variables as confounders observed at subgroup level that might influence the average ‘intervention’ effect. Here, we estimated the effect of the total number of audits, with and without the potential confounders included in the models. The confounders controlled for in the models were the distribution of region, the proportion of private schools and the average size of the schools.
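The idea can be sketched with weighted least squares, weighting each subgroup by the inverse of its estimated variance; this reproduces the VWLS approach described above (the study’s own VWLS analysis was performed in Stata). The data frame, column names and values below are purely illustrative.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical subgroup-level data: one row per audit-order group, with the
# proportion meeting the criteria, the estimated variance of that proportion,
# and subgroup-level covariates (potential confounders).
groups = pd.DataFrame({
    "audit_order":      [1, 2, 3, 4, 5, 6, 7],
    "prop_meeting":     [0.15, 0.22, 0.27, 0.30, 0.34, 0.33, 0.38],
    "var_prop":         [0.0002, 0.0004, 0.0008, 0.0015, 0.0030, 0.0060, 0.0110],
    "prop_private":     [0.18, 0.15, 0.20, 0.14, 0.17, 0.16, 0.19],
    "mean_school_size": [310, 340, 300, 335, 320, 345, 330],
})

X = sm.add_constant(groups[["audit_order", "prop_private", "mean_school_size"]])
# Each subgroup is one observation, weighted by 1/variance of its outcome.
vwls = sm.WLS(groups["prop_meeting"], X, weights=1.0 / groups["var_prop"]).fit()
print(vwls.params)
```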
Finally, as the tool consists of two components – an audit component plus a feedback component – we wanted to consider both as independent predictors. Logistic regressions were performed to test if the odds of meeting the nutritional criteria were predicted by a) the number of occasions a school evaluated its nutritional quality or b) the proportion of occasions a school generated its previous feedback. (We first checked that there was no evidence of a correlation between the number of audits performed and the percentage of all feedback generated; Spearman’s rho 0·019.) Potential confounding factors in both regression models were audit date (expressed as months since March 2012), school characteristics and, for public schools, whether the municipality had an account by the time of the school’s final audit. Statistical significance was set at 0·05. Analysis was performed in IBM SPSS Statistics for Windows (version 26), except for the variance weighted least squares, which was performed in Stata Statistical software (version 16·1).
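A schematic version of these school-level regressions, again on simulated data with hypothetical variable names rather than the study data, might look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

# Simulated school-level data set: one row per school, outcome measured at the final audit.
rng = np.random.default_rng(2)
n = 800
n_audits = rng.integers(1, 8, n)
schools = pd.DataFrame({
    "n_audits": n_audits,
    "pct_feedback": rng.uniform(0, 100, n),
    "region": rng.choice(["east", "south", "north"], n),
    "months_since_launch": rng.uniform(0, 88, n),
    "meets_criteria": rng.binomial(1, 0.10 + 0.03 * n_audits),
})

# Preliminary check: number of audits performed vs. share of feedback generated.
rho, p = spearmanr(schools["n_audits"], schools["pct_feedback"])

# Analogue of an adjusted model: odds of meeting the criteria at the final audit,
# predicted by the total number of audits, adjusted for region and audit date.
model = smf.logit("meets_criteria ~ n_audits + C(region) + months_since_launch",
                  data=schools).fit(disp=0)
print(rho, np.exp(model.params["n_audits"]))   # OR per additional audit
```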
Results
Use of the tool
By July 2019, 2206 primary schools had created an account, corresponding to 45 % of all primary schools in Sweden that year (ca 4800) (Table 1). Additionally, 50 % of the country’s 290 municipalities had created a municipality-level account. During the seven-year period from the launch in spring 2012 to the end of the 2018/19 school year, 1500 schools audited nutritional quality at least once. These schools came from 223 of the country’s 290 municipalities. In total, 4141 audits of nutritional quality were made during this period; 894 schools (57 %) performed two or more audits. For RQ2, 190 schools were excluded, as they had first used the tool before the report function was available; 1310 schools remained. Schools using the tool were not representative of all schools nationally. They tended to be larger, were more likely to be publicly run, and more likely to be from the eastern region of Sweden (Table 1). This pattern remained relatively stable, making it reasonable to compare trends over time.
* Operating in 2017/18.
† Excluding schools which began to use the tool prior to March 2012.
‡ For municipally run schools only.
§ N can vary due to missing data so the n for which data are available is given if different from the n in the header.
¶ Missing data for higher education: 120 of 1500 (of which 91 were due to difficulties locating data, nine lacked data and twenty had fewer than ten pupils with this characteristic, so data are not made public (imputed as 5)).
** Missing data for foreign background: 415 of 1500 (of which 91 were due to difficulties locating data, 50 lacked data and 274 had fewer than ten pupils with this characteristic, so data are not made public (imputed as 5)).
Changes over time
Many schools had difficulty meeting the nutritional criteria for school meals (Table 2). However, the cross-sectional results (RQ1) showed that the proportion meeting the criteria increased significantly with each passing school year, from 11 % in the first full school year of operation, 2012/13, to 34 % in 2018/19 (Table 2). Of the four nutrients included in the tool, schools had most difficulty reaching the requirements for vitamin D and fat quality, while the requirements for fibre and iron were met by most (data not shown). As these yearly cross-sectional datasets included both schools performing an audit for the first time and repeat users, we examined whether this positive trend was also present among first-time users only, who could not have been affected by previous experience with the tool. No such clear trend was seen, and the variation from year to year was high (Table 2).
ST, spring term.
† Includes ninety-four schools participating in the pre/post study from 2011 and 2013 (i.e. not self-selected).
‡ All four nutritional criteria, based on a school’s final audit for that school year.
§ Schools who had completed an audit prior to launch date were excluded.
* Significantly different from reference year 2012/13, the first complete school year: P < 0·01.
Changes following use of the tool
This longitudinal analysis was restricted to the schools that only ever had access to the complete tool, i.e. that first used it in March 2012 or later (n 1310). Over half audited nutritional quality more than once (59 %, n 774). The median length of time between all audits was 367 d (inter-quartile range: 267–502 d). For schools with more than one audit, the proportion meeting all four nutritional criteria on the first audit was 24·5 %, while the proportion meeting the criteria at their final (most recent) audit was 31·6 %.
To investigate whether schools with more audits were more likely to meet the nutritional criteria than those with fewer (RQ2), several analyses were performed. First, the proportion meeting all criteria at each audit, grouped by audit order, is presented. The bars in Fig. 1 show an overall trend towards improved outcomes with higher audit order. Second, because schools that went on to use the tool repeatedly were more likely to have had better results on the first audit than schools that only ever performed one audit (14·3 % met the criteria v. 9·3 %), we stratified schools according to the total number of audits conducted, plotted as lines in Fig. 1. The lines also suggest an overall trend towards improved results, regardless of stratum, although there is considerable variation, particularly in the strata with the most audits, due to small numbers.
Third, the variance weighted least squares regression subgroup analysis, which allowed adjustment for confounders, showed patterns similar to those presented in Fig. 1. The estimates showed an increase in the average proportion meeting nutritional criteria with increasing number of total audits (data not shown). When comparing the models with and without the potential confounders of school characteristics (distribution of region, proportion of private schools and average size of the schools), no strong indication of confounding was observed.
Finally, the results of the logistic regressions show the effect of a) the total number of audits a school had completed and b) how often (% of occasions) the school had generated its prior reports on the likelihood of the school meeting the nutritional criteria at its final audit. These results are presented in Tables 3 and 4, respectively. In Table 3, with schools with just one audit in total as the reference category, the odds of meeting the nutritional criteria at the final – most recent – audit increased by a factor of 1·38 for each increase in the total number of audits completed (CI 1·30, 1·48, model 1). After controlling for geographical region and audit date, the OR was 1·30 (CI 1·20, 1·41, model 2). When restricting the analysis to the 774 schools with repeated use (and with schools with two audits in total as the reference category), results from models 1 and 2 were similar to those for all schools. Model 3 included a variable relevant only to schools with repeated audits, namely the number of days (≥28) that had elapsed since the previous audit. The OR for the final model 3 was 1·26 (CI 1·12, 1·41). Neither the owner of the school, the proportion of pupils with a foreign background nor the proportion with parents with higher education was a significant predictor in the models. For municipal schools, we also considered whether the municipality had an account by the time of the school’s final audit, but this was not significant either and was therefore excluded so that results could be presented for schools regardless of owner.
* Model 1: unadjusted.
† Model 2: Model 1 adjusted for region and time since launch.
‡ Model 3: Model 2 adjusted for days passed since previous audit (only relevant for repeat users).
* Model 1: unadjusted.
† Model 2: Model 1 adjusted for region and time since launch.
‡ Model 3: Model 2 adjusted for days passed since previous audit.
In Table 4, with schools that accessed their previous audit results (i.e. generated their report) only sometimes as the reference category, schools that accessed their prior results almost always were more likely to meet the nutritional criteria: the OR ranged from 2·40 unadjusted (CI 1·48, 3·88) to 2·02 adjusted (CI 1·23, 3·31; model 3, adjusted as before). The ORs in Table 4 were higher than those in Table 3, suggesting that accessing feedback was an even stronger predictor of meeting the nutritional criteria than the number of audits.
Discussion
The findings suggest that both time elapsed since the adoption of a legal requirement for school lunches to be nutritious and repeated use of the School Food Sweden tool, a self-administered audit and feedback tool, exerted an influence on school meal quality in Sweden between 2012 and 2019. Disentangling the two instruments is, however, a challenge due to their universal nature.
Evidence for an effect of the 2011 policy includes the fact that, in repeated cross-sectional analyses, the proportion of all schools meeting nutritional criteria increased with each passing year, from 11 % in 2012/13 to 34 % in 2018/19. This extends the results of previous work in which, using a pre-post study design and with no feedback provided, modest improvements in nutritional quality were found two years after the legislation(Reference Patterson and Elinder16). Legislation is one of the more powerful instruments available to promote behavioural changes(Reference Mozaffarian, Angell and Lang24) and can often give rise to ripple effects – activities and initiatives by other important stakeholders. Some early examples have been described(Reference Patterson and Elinder16), such as the founding of the National Centre for Public Meals (NCPM) at the Swedish Food Agency in 2011. The NCPM overhauled the guidelines for school meals in 2013(17) and since then, the guidelines have been disseminated widely. The centre has also undertaken surveys showing that the proportion of municipalities with an official policy document for school meals adopted by local politicians has risen from 45 % in 2011 to 74 % in 2016 and 85 % in 2018(Reference Grausne and Quetel25), and that the vast majority refer to the national guidelines.
However, the repeated cross-sectional analyses showing improvements in quality over time included both schools using the tool for the first time and repeat users, so if the tool had an effect it would influence this observation. The proportion of schools meeting nutritional criteria did not increase as clearly with each passing year among schools using the tool for the first time, although such an increase would have been expected if time since the introduction of the policy were the only factor. Either schools that started using the tool later are different in some way, perhaps with greater needs (a form of selection bias), or the policy effect is smaller than expected, or maybe even waning. Further evidence for an effect of the tool includes a dose–response relationship, whereby schools that had used the tool more often had better results than those that used it less often, and furthermore achieved better results when they used both components of the tool – audit together with feedback. In our analysis of factors associated with both improvement in and meeting nutritional criteria, repeated use of the tool stood out as a predictor, even when controlling for other important variables known to be associated with nutritional quality, including time since introduction of the policy. These findings point to the effectiveness of the tool in improving nutritional quality, rather than of the policy alone.
This is not to say that the policy had little effect. As mentioned, the policy led to initiatives and increased attention on school food quality, so the take-up of the tool would likely have been lower if not for the policy. On the other hand, if a policy is not carefully evaluated, it is difficult to be sure of its effects. And without follow-up or consequences for non-compliance, effects may be limited. At school level, the inclusion of school meals in internal quality management systems, as also required by the new law, is quite low: by 2016, only half of schools surveyed by Olsson and Waling had done so(Reference Olsson and Waling26). Of municipalities with local policy documents, only 58 % had followed these up within the previous three years(Reference Grausne and Quetel25). Poor evaluation and monitoring is a common and persistent problem in the field of school meal policy(Reference Kovacs, Messing and Sandu5,Reference Nelson and Breda7,Reference Chambers, Boydell and Ford27) and means that good practice and/or lessons learned may be missed. Evaluations of the effects of truly universal meal policies are particularly challenging and are, unsurprisingly, rare(Reference Kovacs, Messing and Sandu5). A recent systematic review of ‘universal’ school meals – both breakfast and lunch – has been conducted by Cohen et al. (Reference Cohen, Hecht and McLoughlin6). The studies they identified predominantly utilised pre-post designs or defined ‘universal’ as a group of schools within a country (not all schools, as in the present study); in the one case where provision was truly universal – Japan – the analysis was cross-sectional. Long-term evaluations are even rarer. For example, the longest follow-up by far in a systematic review of the impact of school food environment policies on actual dietary intakes was 60 months(Reference Micha, Karageorgou and Bakogianni28). One noteworthy exception is a Swedish study in which economists found that adults who had attended school when free school lunches were becoming widespread in the 1950s and 1960s, and who received them during all nine primary school years, benefited from a 3 % increase in lifetime earnings, an effect that was greater for those from poorer households(Reference Lundborg, Rooth and Alex-Petersen29). That analysis could not take quality into account, and the effects of meals in well-nourished populations are probably less dramatic(Reference Ells, Hillier and Shucksmith30), but the finding that inequalities can be dampened via school meals is relevant even today(Reference Colombo, Patterson and Elinder22,Reference Kristjansson, Robinson and Petticrew31).
The implementation strategy of audit and feedback is considered very effective in supporting change, at least in the healthcare setting(Reference Ivers, Jamtvedt and Flottorp14). It appears to be less commonly used to enhance implementation of school-based health-related policies(Reference Wolfenden, Nathan and Sutherland15). Evaluations of the effectiveness of such tools when feedback is provided remotely – fully automated, without in-person follow-up, as in our study – are rare. One school canteen-based audit and feedback randomised controlled trial has been conducted in Australia(Reference Yoong, Nathan and Wolfenden32). Compared with our tool, this was a relatively intensive intervention; the main component was a menu audit, with initial face-to-face contact and subsequent provision of feedback via a written report and telephone call up to four times over a 12-month period. Although no evidence was found for an improvement in the primary outcomes (the proportion of schools with a menu that did not include discouraged foods/beverages, and the proportion where encouraged items made up the majority of the menu), intervention schools offered fewer discouraged items. The intervention has been modified and tested again at scale with positive results(Reference Reilly, Nathan and Wiggers33). In the Netherlands, the Canteen Scan tool(34) has been developed with the aim of assessing compliance with the Dutch Guidelines for Healthier Canteens. This also provides automatic and tailored feedback. In a six-month quasi-experimental controlled trial, improvements in the food environment were noted, but not in pupil purchasing behaviour(Reference Evenhuis, Jacobs and Vyth35). Again, feedback was not provided remotely, as it was in our case. Otherwise there appear to be few other tools similar to the School Food Sweden tool in terms of function and level of automation, but this may be partly because of difficulties in identifying tools that are not well described in the scientific literature. Two relevant systematic reviews have recently been published. Cupertino et al. (Reference Cupertino, Maynard and Queiroz36) identified sixteen instruments (including this one) that have been developed to evaluate school menus. The authors did not assess validity and/or reliability. The majority were not published in English and only one(Reference Martins Rodrigues, Giordani Bastos and Stangherlin Cantarelli37) briefly mentioned that software had been developed to automate checklists and provide a PDF of results. O’Halloran et al. (Reference O’Halloran, Eksteen and Gebremariam38) reviewed thirty-eight measurement methods which have been used to assess school food environments, of which one-third measured data self-reported at school level. Of these, none appeared to be designed for use in an ongoing manner, several focused on attitudes and beliefs, and vanishingly few had investigated validity and/or reliability.
Strengths and limitations
This data set is unique, and the long period of time covered is a strength. Although the time period presented here begins after the legislation came into force, the pre- to post-period has been examined separately(Reference Patterson and Elinder16). The tool appears unusual in its degree of automation, requiring little contact with schools, which increases feasibility. The validity and reliability of the tool and the criteria used to assess nutritional quality have also been described(Reference Patterson, Quetel and Lilja11). Schools using the tool were not representative of all schools nationally; however, this remained relatively stable, making it reasonable to compare trends over time. The biggest limitations of the study are the self-reported data and the lack of control schools. There is no real incentive for schools not to report accurately, as there are no clear consequences for poor results, and the tool is clearly presented as an aid to improvement rather than as a means of control. Still, desirability bias is a common phenomenon and cannot be ruled out. As the policy was national, it was not possible to have control schools that were unexposed to it. As regards schools that were ‘unexposed’ to the tool, we know that they differ with regard to structural factors (e.g. size, owner and region), but we cannot know whether their nutritional quality is different. Are schools that decide to use the tool in greater need of help (but maybe less engaged), or do they have better resources (and maybe greater engagement), or a mixture? This introduces self-selection bias and unbalanced confounders into estimates of the effect of the tool, which may therefore overstate or understate the true effect. To try to compensate, we explored the question from numerous angles, both at audit level and school level. On the assumption that schools that use the tool more frequently are more willing and able to improve from day one, we have, where possible, presented results separately according to frequency of usage. (We found evidence of improvement at all levels of usage.) In effect, we used schools with one audit only, before receiving feedback, as ‘controls’ – this may in fact be better than using schools that do not use the tool at all, as those with one audit are more likely to be similar to other schools using the tool more often. We therefore believe that the comparison with these groups may be less subject to residual confounding. Whether similar improvements are seen in the other five domains of the tool – which would give a fuller picture of changes in school meal quality – has not yet been evaluated but is planned.
Long-term evaluations always face the risk of confounding from other external factors that change over time, likely to be a mixture of positive and negative, and difficult to account for. For example, we know that challenges for the public meal sector today include replacing the many staff approaching retirement age and meeting increasing demands on quality, including requests from parents for special dietary requirements(Reference Grausne and Quetel25), changes to budgets and staff training, etc.
Conclusion
The improvement in nutritional quality of lunches offered in Swedish schools that was first seen two years after the introduction of legislation in 2011 appears to have continued in the subsequent six years. This positive result appears to be at least in part due to repeated use of the School Food Sweden tool. The more schools used the tool, the more likely the lunch menu was to meet nutritional criteria. Self-audit with automatic feedback appears to be effective in helping schools to improve school meal quality and an essential complement to legislation, or a promising alternative in settings where regulation is not an option.
Acknowledgements
Acknowledgements: Our thanks to (then) research assistants Christine Tell and Julia Åhlin and to statistician Zangin Zheebari for their help with secondary data collection and/or preliminary analyses during the first few years. Financial support: Since 2013, the School Food Sweden tool has been hosted and maintained by the Centre for Epidemiology and Social Medicine at Region Stockholm, with additional financial support provided by the Swedish Association of Local Authorities and Regions. Funders had no role in the design of the study, the collection, analysis or interpretation of the data, or in writing the manuscript. Conflict of interest: There are no conflicts of interest. Authorship: E.P. and L.S.E. conceived of the study. E.P. administered the tool and managed the data collected, designed and performed the initial analysis and drafted and revised the manuscript. L.S.E. made substantial contributions to the manuscript. F.A. designed and performed additional statistical analysis, interpreted the results together with E.P. and contributed to the manuscript. All authors read and approved the final version. Ethics of human subject participation: Not applicable as the study does not involve data on humans.