A psychometric approach to assessments of problematic use of online pornography and social networking sites based on the conceptualizations of internet gaming disorder

Abstract

Background

The problematic use of online gaming, social networking sites (SNS) and online pornography (OP) is an evolving problem. Contrary to the problematic use of SNS and OP, Internet gaming disorder (IGD) was included in the new edition of the Diagnostic and statistical manual of mental disorders (DSM-5) as a condition for further study. The present study adapted the criteria for IGD to the problematic use of SNS and OP by modifying a validated questionnaire for IGD (Internet Gaming Disorder Questionnaire: IGDQ) and investigating the psychometric properties of the modified versions, SNSDQ and OPDQ.

Methods

Two online samples (SNS: n = 700, 25.6 ± 8.4 years, 76.4% female; OP: n = 700, 32.9 ± 12.6 years, 76.7% male) completed the SNSDQ/OPDQ, the Brief Symptom Inventory (BSI) and the short Internet Addiction Test (sIAT) and provided information on their SNS/OP use. Standard item and reliability analyses, exploratory and confirmatory factor analyses and correlations with the sIAT were calculated. Problematic and non-problematic users were compared.

Results

The internal consistencies were ωordinal = 0.89 (SNS) and ωordinal = 0.88 (OP). The exploratory factor analyses extracted one factor for both questionnaires. Confirmatory factor analyses confirmed the results. The SNSDQ/OPDQ scores correlated highly with the sIAT scores and moderately with SNS/OP usage time. Of the users, 3.4% (SNS) and 7.1% (OP) lay above the cutoff for problematic use. Problematic users had higher sIAT scores, used the applications for longer and experienced more psychological distress.

Conclusion

Overall, the results of the study indicate that the adaption of the IGD criteria is a promising approach for measuring problematic SNS/OP use.

Background

In 2017, 3.5 billion people used the Internet [1]. Of the many ways of using it, online gaming, social networking sites (SNS) and online pornography (OP) are especially popular. All of these applications are under investigation, since their problematic use seems to be linked to psychological distress and problems with work, academic performance and interpersonal relationships [2,3,4,5,6,7]. With its inclusion in the appendix of the fifth edition of the Diagnostic and statistical manual of mental disorders (DSM-5), Internet gaming disorder (IGD) was recognized as a disorder warranting further investigation [8]. This was the first step towards defining standardized criteria for it. The nine criteria are based on those for substance use disorders and gambling disorder and must have been fulfilled during the last 12 months: (1) preoccupation with gaming, (2) withdrawal when unable to game, (3) tolerance, (4) failure to stop/reduce the amount of gaming, (5) giving up other activities in favour of gaming, (6) continuing to play despite problems, (7) deceiving others about the amount of gaming, (8) gaming to escape adverse moods and (9) jeopardizing an important relationship, one's occupation or one's education because of gaming.

While IGD was included in the DSM-5 as a condition for further study, the problematic use of SNSs and OP was not. Petry and O’Brien (2013) [9] argue that there is a lack of empirical evidence and inconsistency in studies investigating these issues (SNS and OP). Nevertheless, there is an ongoing debate about the existence, classification and diagnosis of the problematic use of specific Internet applications like SNSs or OP [10] and a growing number of studies indicate the relevance of problematic use of SNS and OP [3, 5, 11, 12], not least due to their association with increased levels of psychological distress. This may even include symptoms of psychiatric disorders like depression, anxiety disorders, attention deficit and hyperactivity disorder or obsessive-compulsive disorder [2, 11, 13,14,15].

Assessment of problematic SNS and OP use

There are a number of different diagnostic instruments to assess problematic use of SNS and OP. Most of them are based either on the diagnostic criteria for behavioural addictions (SNS: e.g. Bergen Social Media Addiction Scale [16] | OP: e.g. Problematic Pornography Consumption Scale [17]) or on the Internet Addiction Test [18] (SNS: e.g. Addictive Tendencies Towards SNSs Scale [19] | OP: sIAT-sex [20]). Note that this is by no means an exhaustive list of diagnostic instruments; for a detailed overview, see Andreassen (2015) [2] for SNS and Wéry & Billieux (2017) [21] for OP. There is no shortage of well-validated instruments, but two problems remain: (i) the theoretical conceptualisations of problematic SNS and OP use differ, with the consequence (ii) that no unified, standardized criteria are available to assess problematic use of the three most important specific online applications (gaming, SNS, OP) in a comparative manner.

The most recent theoretical model for specific Internet-use disorders is the I-PACE model [22]. It is based on empirical findings and integrates previous theoretical considerations from other models in the field of behavioural addictions, such as the Syndrome Model [23] and the Components Model of Addiction [24]. The I-PACE model hypothesizes that the aetiology of problematic use is similar for different Internet applications. It therefore suggests applying uniform diagnostic criteria to all applications, thereby standardizing the diagnostics and allowing comparisons of prevalence rates. Since the American Psychiatric Association has already proposed standardized criteria for IGD, it is a natural step to apply these criteria to the problematic use of other Internet applications, and several researchers endorse this approach [25,26,27]. Some studies have already used it to develop psychometric tools for assessing problematic Internet use [26, 28, 29]. However, to the best of the authors' knowledge, only one study has applied this approach to the problematic use of SNS [27] and none to the problematic use of OP.

Aim of the present study

The aim of this study was therefore to examine to what extent the conceptualization of Internet gaming disorder can be adapted to the problematic use of SNS and OP. Petry et al. (2014) [30], members of the Substance Use Disorder Work Group that recommended including IGD in the DSM-5, published a questionnaire (Internet Gaming Disorder Questionnaire: IGDQ) to assess IGD. For this study, we used the German version, which was validated by Jeromin, Rief and Barke (2016) [31], and adapted it for problematic SNS and OP use by rephrasing the items (for details, see the "Measures" section). In order to evaluate to what degree the concept of IGD can furnish a useful starting point for the assessment of problematic SNS and OP use, we investigated the psychometric properties of the two modified versions, the SNSDQ and the OPDQ.

Methods

Participants and procedure

The data were collected via an online survey (October 2017 – January 2018). The link to the questionnaire was posted to general (e.g. reddit) and application-specific Internet forums (e.g. Facebook groups), SNS and mailing lists. At the outset, the participants specified whether they mainly used SNS or OP and were redirected to the corresponding questionnaire (SNS/OP). As an incentive, participants could win one of five gift vouchers for an online store (voucher value: €20). The inclusion criteria were informed consent and an age of ≥ 18 years. The exclusion criteria were not being a native German speaker and spending ≤ 5% of one's online time on SNS/OP.

SNS subsample

A total of 939 participants fulfilled the inclusion criteria. Of these, 239 (25.45%) had to be excluded: 228 because they had missing data for the SNSDQ, 7 because they failed to provide serious information (e.g. Klingon as their native language) and 4 because they had an unrealistically fast answering time (2 SDs below the mean time). In the end, data from 700 participants were analysed (Table 1).

Table 1 Characteristics of the SNS and OP samples

OP subsample

A total of 1858 participants met the inclusion criteria. Of these, 669 (36.01%) had to be excluded: 630 because they had missing data for the OPDQ, 25 because they provided obviously false information, 9 because of an unrealistically fast answering time and 5 due to comments suggesting that they had failed to understand the survey. To increase the statistical comparability of the two subsamples (SNS/OP), a random sample of 700 participants was drawn from the remaining 1189. Finally, data from 700 participants were analysed (Table 1).

Measures

Socio-demographic information

Information regarding gender, age, education, employment and relationship status was collected.

Information regarding general and specific internet use

The participants reported how much time (hours) they spend online in a typical week. In addition, they provided specific information regarding their SNS or OP use, such as which SNS/OP sites they mostly use and how long they use SNSs or OP (hours/week).

Problematic use

The tendency towards problematic SNS or OP use was assessed with the German versions of the SNSDQ and OPDQ. These questionnaires are modified versions of the IGDQ. The IGDQ consists of nine items, which reflect the corresponding DSM-5 criteria for IGD. It has a dichotomous response format with the options 'no' (0) and 'yes' (1). The score is obtained by adding the responses (score range: 0–9). A score of ≥ 5 was defined as the cutoff for receiving a diagnosis of IGD [30]. For the adaptation to SNS and OP, the original items were rephrased by replacing all references to online gaming with references to SNS or OP, for example, 'Do you feel restless, irritable, moody, angry, anxious or sad when attempting to cut down or stop using SNS or when you are unable to use SNS?' instead of 'Do you feel restless, irritable, moody, angry, anxious or sad when attempting to cut down or stop gaming or when you are unable to play?'
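As an illustration of the scoring, the following minimal R sketch computes the sum score and the cutoff classification. The data frame dat and the item columns snsdq_1 to snsdq_9 are hypothetical names, not part of the original study materials.

    # Minimal sketch: scoring the nine dichotomous SNSDQ items (0 = 'no', 1 = 'yes').
    # 'dat' and the column names snsdq_1 ... snsdq_9 are hypothetical.
    items <- paste0("snsdq_", 1:9)
    dat$snsdq_score <- rowSums(dat[, items])   # sum score, range 0-9
    dat$problematic <- dat$snsdq_score >= 5    # cutoff analogous to the IGDQ [30]

The OPDQ would be scored in the same way with its own item columns.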

Short internet addiction test

The sIAT is a short version of the Internet Addiction Test and consists of 12 statements expressing possible symptoms of problematic Internet use (e.g. ‘How often do you find yourself saying “just a few more minutes” when online?’) [18]. For our study, we used the validated German version and rephrased the items for SNS and OP use (e.g. ‘How often do you try to cut down the amount of time you spend watching online pornography and fail?’) [32]. The participants have to rate the frequency with which they experienced each symptom in the last week on a 5-point scale ranging from 1 (‘never’) to 5 (‘very often’). In the resulting sum score (12–60 points), higher scores indicate more problematic use. The internal consistencies of the adapted scales in the present study were good (SNS: ω = 0.88 | OP: ω = 0.88).

Brief symptom inventory

The German version of the Brief Symptom Inventory (BSI) was used to identify clinically relevant symptoms of the participants [33, 34]. The BSI consists of 53 statements expressing symptoms of psychological distress (e.g. ‘In the last 7 days, how much were you distressed by feeling tense or keyed up?’). The items are answered on a 5-point scale ranging from 0 (‘not at all’) to 4 (‘extremely’). The total score ranges between 0 and 212, with higher scores indicating a higher level of distress. The internal consistency in the present samples was excellent, with ω = 0.96 (SNS) and ω = 0.96 (OP).

Data analysis

Statistical analyses were conducted using SPSS 24 (IBM SPSS Statistics), SPSS Amos, R version 3.5.1 [35] and FACTOR for the exploratory factor analysis (EFA) [36]. For the standard item analyses of each questionnaire, the SNSDQ and the OPDQ, item difficulties and item–total correlations were calculated. As a measure of reliability, coefficient omega or ordinal omega (in the case of dichotomous data) was computed. These coefficients are recommended as a more accurate alternative to Cronbach's alpha, especially when the assumption of tau-equivalence is violated [37,38,39,40]. Regarding validity, we investigated the factor structures by conducting EFAs and confirmatory factor analyses (CFA). For these, each sample (SNS and OP) was randomly divided into two subsamples (SNS1, SNS2 and OP1, OP2; each subsample: n = 350). The subsamples SNS1 and OP1 were used for the EFAs and SNS2 and OP2 for the CFAs. All other calculations are based on the total samples. To test whether the subsamples differed in key variables (age, SNSDQ/OPDQ score), independent t tests were performed. To ascertain the data's suitability for EFA, the Kaiser–Meyer–Olkin measure (KMO) and Bartlett's test of sphericity were employed. Because of the dichotomous response format of the SNSDQ and the OPDQ, the EFAs followed Jeromin et al. (2016) [31] in using tetrachoric correlations as input and unweighted least squares as the estimation method [41]. The number of factors to be extracted was determined using Velicer's MAP test [42].
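To make this analysis pipeline concrete, the following R sketch reproduces the main steps with the psych package. It is an assumed equivalent of the SPSS/FACTOR procedures reported above, not the authors' original code; sns1_items is a hypothetical data frame holding the nine dichotomous items of subsample SNS1.

    library(psych)

    # 'sns1_items': hypothetical data frame with the 9 dichotomous SNSDQ items (0/1) of SNS1
    n <- nrow(sns1_items)

    # Item difficulties (proportion of affirmative answers) and corrected item-total correlations
    colMeans(sns1_items)
    alpha(sns1_items)$item.stats$r.drop

    # Tetrachoric correlations as input for the EFA
    rho <- tetrachoric(sns1_items)$rho

    # Suitability checks: KMO measure and Bartlett's test of sphericity
    KMO(rho)
    cortest.bartlett(rho, n = n)

    # Velicer's MAP test (reported in the vss output) to determine the number of factors
    vss(rho, n = 4, n.obs = n)

    # One-factor EFA with unweighted least squares estimation
    fa(rho, nfactors = 1, n.obs = n, fm = "uls")

    # Ordinal omega, i.e. omega computed from the tetrachoric correlation matrix
    omega(rho, nfactors = 1, n.obs = n)

The same steps would be applied to OP1 with the OPDQ items.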

A CFA was performed on SNS2 and OP2 to test the factor solution. The model parameters were estimated using maximum likelihood estimation. Because the normality assumption was violated, Bollen-Stine bootstrapping was applied [43]. To evaluate the model fit, the comparative fit index (CFI), root mean square error of approximation (RMSEA) and standardized root mean square residual (SRMR) were calculated. According to Hu and Bentler (1999) [44], the cutoff criteria for an acceptable model fit are a CFI of > 0.95, an RMSEA between 0.06 and 0.08 and an SRMR of < 0.08.
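The CFA could be specified analogously in R with the lavaan package; the authors used SPSS Amos, so this is only an assumed equivalent, and the variable names i1 to i9 in the model syntax are hypothetical.

    library(lavaan)

    # 'sns2_items': hypothetical data frame with the 9 SNSDQ items (i1 ... i9) of subsample SNS2
    model <- 'SNSD =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9'
    fit <- cfa(model, data = sns2_items, estimator = "ML")

    # Fit indices reported in the paper
    fitMeasures(fit, c("cfi", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))

    # Bollen-Stine bootstrap of the chi-square statistic (robust to non-normality)
    boot_chisq <- bootstrapLavaan(fit, R = 1000, type = "bollen.stine",
                                  FUN = fitMeasures, fit.measures = "chisq")
    mean(boot_chisq >= fitMeasures(fit, "chisq"))   # Bollen-Stine bootstrap p-value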

Bivariate relationships between the SNSDQ and OPDQ scores and the time spent using the Internet in general, the time spent using the preferred application (SNS/OP) and the sIAT scores were tested with Pearson correlations.
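For example, the correlation between the SNSDQ score and the weekly SNS usage time could be obtained as follows (column names hypothetical):

    # Pearson correlation between SNSDQ score and weekly SNS usage time
    cor.test(dat$snsdq_score, dat$sns_hours_week, method = "pearson")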

To give a first indication of diagnostic validity, we compared problematic users with non-problematic users. Analogously to the IGDQ, users with a score of ≥ 5 points were categorized as problematic users and all other users as non-problematic [30, 31]. Independent t tests (in the case of unequal variances: Welch’s tests) were computed to compare the groups regarding age, time spent using the Internet, time spent using their preferred application and sIAT and BSI scores. Due to the unequal group sizes, Hedges’ g is reported as a measure of effect size [45]. An effect of g = 0.20 is regarded as small, g = 0.50 as medium and g = 0.80 as large [45].
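A sketch of this comparison for the sIAT scores, again with hypothetical column names and using the effsize package for Hedges' g:

    library(effsize)

    # Split by the cutoff-based classification defined above (hypothetical columns)
    siat_problem <- dat$siat_score[dat$problematic]
    siat_normal  <- dat$siat_score[!dat$problematic]

    # Welch's t test (no assumption of equal variances)
    t.test(siat_problem, siat_normal, var.equal = FALSE)

    # Hedges' g, appropriate for unequal group sizes
    cohen.d(siat_problem, siat_normal, hedges.correction = TRUE)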

Results

SNS, OP and Internet use

SNS

The participants used the Internet on average for 20.9 ± 14.8 h/week and SNSs for 9.4 ± 10 h/week (44% of the total online time), with Facebook being the most popular SNS (n = 355; 50.7%), followed by Instagram (n = 196; 28%) and YouTube (n = 74; 10.6%). The mean SNSDQ and sIAT scores were 1.2 ± 1.5 and 23.6 ± 7.3 points. Overall, 24 participants (3.4%) had an SNSDQ score of ≥5 points and thus lay above the cut-off for problematic use (see Fig. 1 for details). The mean BSI total score across all the participants was 9.8 ± 16.7.

Fig. 1: Percentage of participants fulfilling different numbers of criteria of the modified IGDQ (SNS and OP)

OP

The participants used the Internet on average for 21.9 ± 15.6 h/week and consumed OP for 3.9 ± 6.1 h/week (18.9% of the total online time). The most popular form of OP was videos (n = 351; 50.1%), followed by pictures (n = 275; 39.3%) and webcams (n = 71; 10.1%). The mean OPDQ and sIAT scores were 1.5 ± 1.7 and 22.3 ± 7.9 points. A total of 50 participants (7.1%) achieved an OPDQ score above the cutoff of ≥ 5 points (see Fig. 1 for details). The mean BSI score across all the participants was 25.6 ± 27.6.

Item analysis and internal consistency

The results of the item analyses are presented in Tables 2 and 3.

Table 2 Results of the item analysis and exploratory factor analysis (SNS)
Table 3 Results of the item analysis and exploratory factor analysis (OP)

SNS

For the SNS version, item 7 had the lowest endorsement (number of affirmative answers (naa) = 21), while item 6 had the highest (naa = 247). This translates into an item difficulty of pi = 0.03 (item 7) and pi = 0.35 (item 6), with a mean difficulty across all items of pi = 0.13. The corrected item–total correlations ranged from ritc = 0.28 (item 3) to ritc = 0.39 (items 4, 5 and 6), with a mean of ritc = 0.36. The internal consistency was ωordinal = 0.89, and the scale would not have benefited from removing any item.

OP

In the OP version of the questionnaire, item 9 (naa = 24) had the lowest endorsement rate, whereas item 7 had the highest (naa = 286). The mean item difficulty was pi = 0.17, with item 9 being the most (pi = 0.03) and item 7 (pi = 0.41) the least difficult. The corrected item–total correlations ranged between ritc = 0.29 (item 7) and ritc = 0.47 (item 5), with a mean corrected item–total correlation of ritc = 0.38. The internal consistency was ωordinal = 0.88. Removing items would not have increased the internal consistency.

Factor structure

The subsamples (SNS1 vs. SNS2; OP1 vs. OP2) did not differ with regard to age, gender, Internet use, SNS/OP use, sIAT, SNSDQ/OPDQ and BSI scores (see Appendix).

SNS

Bartlett’s test of sphericity (Χ2 = 407.4, df = 36, p < 0.001) as well as the KMO criterion (0.74) indicated that the data were suitable for EFA. Velicer’s MAP test recommended the extraction of a single factor. This factor explained 52.74% of the total variance. The factor loadings ranged between 0.54 (item 3) and 0.78 (item 9) (Table 2). A CFA with the subsample SNS2 was calculated to test the one-factor solution. The fit indices were CFI = 0.81, RMSEA = 0.092 [CI = 0.075–0.111] and SRMR = 0.064 (for the path diagram, see Fig. 2).

Fig. 2: Path diagram for the confirmatory factor analysis with subsample SNS2 (n = 350). All path coefficients are standardized and statistically significant (p < 0.001)

OP

Bartlett’s test of sphericity (Χ2 = 455.7, df = 36, p < 0.001) and the KMO criterion (0.80) indicated that the data were suitable for EFA, and the MAP test suggested a one-factor solution. The extracted factor explained 53.30% of the total variance. Items 3 and 7 had the lowest factor loadings (0.52), while item 9 had the highest (0.93) (Table 3). The one-factor solution was tested with a CFA (subsample: OP2). The model fit indices were CFI = 0.87, RMSEA = 0.080 [CI = 0.062–0.099] and SRMR = 0.057 (for the path diagram, see Fig. 3).

Fig. 3: Path diagram for the confirmatory factor analysis with subsample OP2 (n = 350). All path coefficients are standardized and statistically significant (p < 0.001)

Correlations with SNS/OP/internet use and sIAT scores

SNS

The SNSDQ scores correlated with the SNS usage time (r = 0.32, p < 0.01), the weekly Internet use time (r = 0.16, p < 0.01) and the sIAT scores (r = 0.73, p < 0.01).

OP

The OPDQ scores correlated with the OP usage time (r = 0.22, p < 0.01) and very weakly with the Internet usage time per week (r = 0.08, p < 0.05). The highest correlation was found with the sIAT scores (r = 0.72, p < 0.01).

Comparison of persons with problematic and non-problematic SNS/OP use

SNS

Compared with non-problematic users, the problematic SNS users spent considerably more time using SNS and had higher sIAT scores. They also appeared to experience more psychological distress, but despite the size of this difference the effect did not reach statistical significance (p = 0.13). For details see Table 4.

Table 4 Comparison of the participants with problematic and non-problematic use of SNS/OP

OP

Compared with non-problematic users, participants identified as problematic OP users spent more time on the Internet in general and more time using OP, had much higher sIAT scores and experienced more psychological distress (Table 4).

Discussion

In the present study, we adapted the German version of the IGDQ to the use of SNSs and OP and evaluated the psychometric properties of the modified versions in order to investigate to what extent the IGD criteria are suitable for assessing problematic use of SNS and OP.

Item analysis

The average endorsement of the items was low for both questionnaires, which is expected and desirable given that the checklists assess criteria of problematic use in a non-clinical sample. For SNS, the most endorsed item, item 6, concerns procrastination. This appears plausible, since SNS are often used to procrastinate [46, 47]. Item 7 (deceive/cover up) received the lowest endorsement, which also seems reasonable given that many people use SNS on a daily basis and in a socially accepted manner, making lying about it unnecessary [12]. For OP, item 7 (deceive/cover up) had the highest endorsement. This is possibly the case because the social acceptance of OP is rather low even if it is used casually and many people may feel embarrassed about it [48]. The lowest endorsement was for item 9, which seems reasonable, since it implies severe consequences (risk/loss of relationships/opportunities). The corrected item–total correlations were medium for both questionnaires and above the threshold of ritc = 0.30 [43]. The only exceptions were item 3 for SNS and item 7 for OP. Item 3 refers to tolerance, a criterion that is typical of substance abuse but seems to be harder to apply in the context of SNSs [49]. The low corrected item–total correlation for item 7 (OP) seems reasonable, since, as discussed, the use of OP may generally be associated with embarrassment, so deceiving others about one’s use does not discriminate well between problematic and unproblematic users.

Reliability

The SNSDQ and the OPDQ showed good internal consistencies (SNS: ωordinal = 0.89; OP: ωordinal = 0.88). These values are comparable to those of other questionnaires measuring problematic SNS use (e.g. Bergen Social Media Addiction Scale: α = 0.88) or OP use (e.g. sIAT-sex: α = 0.88) [16, 20].

Validity

In the course of the EFAs, a single factor was extracted for the SNS as well as the OP version of the questionnaire. This is in line with the result for the original IGDQ [31]. Item 3 had the lowest factor loading in both versions, probably because the tolerance criterion does not fit the context of SNS and OP very well. Ultimately, the tolerance criterion originated with substance-based addictions. In that context, its meaning was much more clearly defined than with regard to the problematic use of OP, SNS or, indeed, online gaming, for which its usefulness is also discussed controversially (pro: [30, 50] | contra: [51, 52]). In the OP version, item 7 (deceive/cover up) also had a lower factor loading than the other items. This reflects the argument made above about why the item does not differentiate well between problematic and non-problematic users (37.4% of the non-problematic and 86% of the problematic users endorsed it). It indicates that covering-up behaviour is not specifically associated with problematic use as measured by the OPDQ but rather with social attitudes towards OP in general.

Overall, the results of the CFAs suggest that the one-factor solutions for both questionnaires are questionable and do not represent a good fit. While the SRMR was good for both models, the CFI fell below and the RMSEA above the respective cutoffs. As in the EFAs, item 3 for SNS and item 7 for OP had particularly low factor loadings. This implies that their correlation with the respective overall scale is low and, accordingly, that their correlation with problematic usage behaviour is low. While this does not necessarily pose a problem, subsequent studies should check whether these items should be revised, weighted differently or even removed.

Both questionnaires correlated strongly with the corresponding sIAT versions, indicating good convergent validity. The SNS version showed small to medium correlations with the general Internet usage and SNS usage time (per week). The OP version also showed a small correlation with the OP usage time (per week). The size of the correlations of problematic use with time spent using the respective application is in the range of those consistently reported [53,54,55].

To evaluate the diagnostic validity of the SNSDQ and OPDQ, we first compared observed prevalence rates with those found in other studies. For SNSs, 3.4% of the participants exceeded the cutoff, and, with regard to OP, 7.1% met the criteria for problematic use. Although comparing prevalence rates is difficult due to the multitude of different diagnostic instruments, the rates found here are comparable to some in the existing literature. In their study of a national representative sample of Hungarian adolescents, Bányai et al. (2017) [3] found a prevalence rate of 4.5% for problematic SNS use. Regarding the problematic use of OP, Giordano and Cashwell (2017) [55] reported a prevalence rate of 10.3% in a sample of American college students and Ross and colleagues (2012) [15] found a rate of 7.6% in a sample of Swedish adults.

It is important to note that no diagnosis can be made with these instruments. Firstly, neither the DSM-5 nor the ICD-11 contains a diagnosis for the problematic use of OP or SNS. Secondly, even if they did, a clinical interview by an expert would be necessary to verify the presence of clinically significant distress and functional impairment and the absence of any exclusion criteria in the individual case, which are requirements for a psychiatric diagnosis. Such an independent clinical judgement was not collected in the present study, so we cannot determine whether persons above the cutoff would warrant a diagnosis. However, we would consider them possible candidates for such a diagnosis. To further investigate the diagnostic validity, we compared the users above and below the cutoff and found marked differences. Problematic users spent more time online per week (only for OP) and used their preferred application for longer. Although an increased usage time is not a sufficient criterion for inferring problematic use, several studies have found a correlation, albeit a weak one, between usage time and problematic use [53,54,55]. In addition, problematic users had much higher sIAT scores and seemed to experience a higher level of psychological distress (only for OP). Overall, these results, particularly the very large difference in BSI total scores between problematic and non-problematic OP users, may be regarded as first indicators of the criterion validity of the instruments and suggest that the IGD criteria might be suitable for identifying individuals with a problematic use of SNS or OP [56].

Limitations

The study has to be considered in the light of its limitations. One limitation is that only adult participants were tested, although SNS in particular are also frequently used by adolescents [3]. A further limitation is that the participants did not answer all of the questionnaires regarding problematic use (SNS, OP and IGD), which would have allowed a more detailed investigation of the overlap between the problematic use of the respective applications. Moreover, only self-reported data were collected, which are prone to biases such as social desirability and common method variance. In addition, the data did not include a clinical judgement. Considering that the aim of the self-report checklists is to identify problematic users, further studies should investigate their validity with samples of persons who are judged by clinicians to show problematic use in a clinically relevant sense. Furthermore, it is important to note that neither the criteria for a diagnosis nor the number of items nor any cutoff have been agreed upon. We do not intend to propose any arguments as to whether these behavioural patterns would warrant the status of a "disorder". Rather, we aim to promote research into the identification of problematic SNS and OP use by providing a common instrument that may help with a comparative assessment, and we suggest using this instrument as a common starting point for such investigations, amending it as further research suggests.

Conclusion

As some psychometric parameters of the tested questionnaires are not satisfactory, it seems that the IGD criteria cannot simply be transferred to the problematic use of SNS/OP. Nevertheless, our overall results indicate that they are a promising starting point and support the viability of using adapted IGD criteria as a framework for assessing problematic SNS/OP use. This study contributes to research on measuring problematic SNS and OP use and may be a first step towards a standardized assessment of these emerging constructs. Future research should further investigate the usefulness of the DSM-5 criteria for IGD in the context of SNS/OP use.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BSI:

Brief Symptom Inventory

CFA:

Confirmatory Factor Analysis

CFI:

Comparative Fit Index

CI:

Confidence Interval

DSM-5:

Diagnostic and statistical manual of mental disorders

EFA:

Exploratory Factor Analysis

IGD:

Internet gaming disorder

KMO:

Kaiser–Meyer–Olkin

NAA:

Number of affirmative answers

OP:

Online Pornography

OPDQ:

Online Pornography Disorder Questionnaire

RMSEA:

Root mean square error of approximation

sIAT:

Short Internet Addiction Test

SNS:

Social Networking Sites

SNSDQ:

Social Networking Sites Disorder Questionnaire

SRMR:

Standardized root mean square residual

References

  1. ITU. Number of internet users worldwide from 2005 to 2017. 2018. https://www.statista.com/statistics/273018/number-of-internet-users-worldwide/. Accessed 26 Jun 2018.

  2. Andreassen CS. Online social network site addiction: a comprehensive review. Curr Addict Rep. 2015;2:175–84. https://doi.org/10.1007/s40429-015-0056-9.

  3. Bányai F, Zsila Á, Király O, Maraz A, Elekes Z, Griffiths MD, et al. Problematic social media use: results from a large-scale nationally representative adolescent sample. PLoS One. 2017;12:e0169839. https://doi.org/10.1371/journal.pone.0169839.

  4. Griffiths MD. Internet sex addiction: a review of empirical research. Addict Res Theory. 2011;20:111–24. https://doi.org/10.3109/16066359.2011.588351.

  5. Grubbs JB, Volk F, Exline JJ, Pargament KI. Internet pornography use: perceived addiction, psychological distress, and the validation of a brief measure. J Sex Marital Ther. 2015;41:83–106. https://doi.org/10.1080/0092623X.2013.842192.

  6. Kuss DJ, Griffiths MD. Internet gaming addiction: a systematic review of empirical research. Int J Ment Health Addiction. 2012;10:278–96. https://doi.org/10.1007/s11469-011-9318-5.

  7. Pontes HM, Griffiths MD. Portuguese validation of the internet gaming disorder scale-short-form. Cyberpsychol Behav Soc Netw. 2016;19:288–93. https://doi.org/10.1089/cyber.2015.0605.

  8. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. American Psychiatric Publishing; 2013.

  9. Petry NM, O'Brien CP. Internet gaming disorder and the DSM-5. Addiction. 2013;108:1186–7. https://doi.org/10.1111/add.12162.

  10. Billieux J, Schimmenti A, Khazaal Y, Maurage P, Heeren A. Are we overpathologizing everyday life? A tenable blueprint for behavioral addiction research. J Behav Addict. 2015;4:119–23. https://doi.org/10.1556/2006.4.2015.009.

  11. de Alarcón R, de la Iglesia JI, Casado NM, Montejo AL. Online porn addiction: what we know and what we don't - a systematic review. J Clin Med. 2019. https://doi.org/10.3390/jcm8010091.

  12. Kuss DJ, Griffiths MD. Social networking sites and addiction: ten lessons learned. Int J Environ Res Public Health. 2017. https://doi.org/10.3390/ijerph14030311.

  13. Andreassen SC, Billieux J, Griffiths MD, Kuss DJ, Demetrovics Z, Mazzoni E, Pallesen S. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: a large-scale cross-sectional study. Psychol Addict Behav. 2016;30:252–62. https://doi.org/10.1037/adb0000160.

  14. Hussain Z, Griffiths MD. Problematic social networking site use and comorbid psychiatric disorders: a systematic review of recent large-scale studies. Front Psychiatry. 2018;9:686. https://doi.org/10.3389/fpsyt.2018.00686.

  15. Ross MW, Månsson S-A, Daneback K. Prevalence, severity, and correlates of problematic sexual internet use in Swedish men and women. Arch Sex Behav. 2012;41:459–66. https://doi.org/10.1007/s10508-011-9762-0.

  16. Andreassen CS, Pallesen S, Griffiths MD. The relationship between addictive use of social media, narcissism, and self-esteem: findings from a large national survey. Addict Behav. 2017;64:287–93. https://doi.org/10.1016/j.addbeh.2016.03.006.

  17. Bőthe B, Tóth-Király I, Zsila Á, Griffiths MD, Demetrovics Z, Orosz G. The development of the problematic pornography consumption scale (PPCS). J Sex Res. 2018;55:395–406. https://doi.org/10.1080/00224499.2017.1291798.

  18. Young KS. Internet addiction: the emergence of a new clinical disorder. Cyber Psychology Behavior. 1998;1:237–44. https://doi.org/10.1089/cpb.1998.1.237.

  19. Wu AMS, Cheung VI, Ku L, Hung EPW. Psychological risk factors of addiction to social networking sites among Chinese smartphone users. J Behav Addict. 2013;2:160–6. https://doi.org/10.1556/JBA.2.2013.006.

  20. Wéry A, Burnay J, Karila L, Billieux J. The short French internet addiction test adapted to online sexual activities: validation and links with online sexual preferences and addiction symptoms. J Sex Res. 2016;53:701–10. https://doi.org/10.1080/00224499.2015.1051213.

  21. Wéry A, Billieux J. Problematic cybersex: conceptualization, assessment, and treatment. Addict Behav. 2017;64:238–46. https://doi.org/10.1016/j.addbeh.2015.11.007.

  22. Brand M, Young KS, Laier C, Wölfling K, Potenza MN. Integrating psychological and neurobiological considerations regarding the development and maintenance of specific internet-use disorders: an interaction of person-affect-cognition-execution (I-PACE) model. Neurosci Biobehav Rev. 2016;71:252–66. https://doi.org/10.1016/j.neubiorev.2016.08.033.

  23. Shaffer HJ, LaPlante DA, LaBrie RA, Kidman RC, Donato AN, Stanton MV. Toward a syndrome model of addiction: multiple expressions, common etiology. Harv Rev Psychiatry. 2004;12:367–74. https://doi.org/10.1080/10673220490905705.

  24. Griffiths M. A ‘components’ model of addiction within a biopsychosocial framework. J Subst Abus. 2009;10:191–7. https://doi.org/10.1080/14659890500114359.

  25. Rumpf H-J, Bischof G, Bischof A, Besser B, Meyer C, John U. Applying DSM-5 criteria for internet gaming disorder for the broader concept of internet addiction. J Behav Addict. 2015;4:34–6.

  26. Pontes HM, Griffiths MD. The development and psychometric evaluation of the internet disorder scale (IDS-15). Addict Behav. 2017;64:261–8. https://doi.org/10.1016/j.addbeh.2015.09.003.

  27. van den Eijnden RJJM, Lemmens JS, Valkenburg PM. The social media disorder scale. Comput Hum Behav. 2016;61:478–87. https://doi.org/10.1016/j.chb.2016.03.038.

  28. Cho H, Kwon M, Choi J-H, Lee S-K, Choi JS, Choi S-W, Kim D-J. Development of the internet addiction scale based on the internet gaming disorder criteria suggested in DSM-5. Addict Behav. 2014;39:1361–6. https://doi.org/10.1016/j.addbeh.2014.01.020.

  29. Pontes HM, Griffiths MD. The development and psychometric properties of the internet disorder scale–short form (IDS9-SF). addicta; 2017. https://doi.org/10.15805/addicta.2016.3.0102.

  30. Petry NM, Rehbein F, Gentile DA, Lemmens JS, Rumpf H-J, Mößle T, et al. An international consensus for assessing internet gaming disorder using the new DSM-5 approach. Addiction. 2014;109:1399–406. https://doi.org/10.1111/add.12457.

  31. Jeromin F, Rief W, Barke A. Validation of the internet gaming disorder questionnaire in a sample of adult German-speaking internet gamers. Cyberpsychol Behav Soc Netw. 2016;19:453–9. https://doi.org/10.1089/cyber.2016.0168.

  32. Pawlikowski M, Altstötter-Gleich C, Brand M. Validation and psychometric properties of a short version of Young’s internet addiction test. Comput Hum Behav. 2013;29:1212–23. https://doi.org/10.1016/j.chb.2012.10.014.

  33. Derogatis LR. The brief symptom inventory (BSI): administration, scoring & procedures manual-II: clinical psychometric research; 1992.

  34. Franke G. BSI. Brief symptom inventory - deutsche version. Manual. Göttingen: Beltz; 2000.

  35. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2018. https://www.R-project.org/.

  36. Lorenzo-Seva U, Ferrando PJ. FACTOR 9.2. Applied Psychological Measurement. 2013;37:497–8. doi:https://doi.org/10.1177/0146621613487794.

  37. Gadermann AM, Guhn M, Zumbo B. Estimating ordinal reliability for Likert-type and ordinal item response data: a conceptual, empirical, and practical guide. Pract Assess Res Eval. 2012;17:3.

  38. Peters G-J. The alpha and the omega of scale reliability and validity: why and how to abandon Cronbach’s alpha and the route towards more comprehensive assessment of scale quality: Open Science framework; 2014.

  39. Trizano-Hermosilla I, Alvarado JM. Best alternatives to Cronbach's alpha reliability in realistic conditions: congeneric and asymmetrical measurements. Front Psychol. 2016;7:769. https://doi.org/10.3389/fpsyg.2016.00769.

  40. Zumbo BD, Gadermann AM, Zeisser C. Ordinal Versions of Coefficients Alpha and Theta for Likert Rating Scales. J Mod App Stat Meth. 2007;6:21–9. https://doi.org/10.22237/jmasm/1177992180.

  41. Holgado-Tello FP, Chacón-Moscoso S, Barbero-García I, Vila-Abad E. Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Qual Quant. 2010;44:153–66. https://doi.org/10.1007/s11135-008-9190-y.

  42. Velicer WF. Determining the number of components from the matrix of partial correlations. Psychometrika. 1976;41:321–7. https://doi.org/10.1007/BF02293557.

  43. Bühner M. Einführung in die Test- und Fragebogenkonstruktion. 2nd ed. München, Don Mills: Pearson Studium; 2006.

  44. Hu L-t, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model Multidiscip J. 1999;6:1–55. https://doi.org/10.1080/10705519909540118.

  45. Ellis PD. The essential guide to effect sizes: statistical power, meta-analysis, and the interpretation of research results: Cambridge University press; 2010.

  46. Meier A, Reinecke L, Meltzer CE. “Facebocrastination”?: predictors of using Facebook for procrastination and its effects on students’ well-being. Comput Hum Behav. 2016;64:65–76. https://doi.org/10.1016/j.chb.2016.06.011.

  47. Ryan T, Chester A, Reece J, Xenos S. The uses and abuses of Facebook: a review of Facebook addiction. J Behav Addict. 2014;3:133–48. https://doi.org/10.1556/JBA.3.2014.016.

  48. Twohig MP, Crosby JM, Cox JM. Viewing internet pornography: for whom is it problematic, how, and why? Sex Addict Compuls. 2009;16:253–66. https://doi.org/10.1080/10720160903300788.

  49. Kardefelt-Winther D, Heeren A, Schimmenti A, van Rooij A, Maurage P, Carras M, et al. How can we conceptualize behavioural addiction without pathologizing common behaviours? Addiction. 2017;112:1709–15. https://doi.org/10.1111/add.13763.

  50. Rehbein F, Kliem S, Baier D, Mößle T, Petry NM. Prevalence of internet gaming disorder in German adolescents: diagnostic contribution of the nine DSM-5 criteria in a state-wide representative sample. Addiction. 2015;110:842–51. https://doi.org/10.1111/add.12849.

  51. Kardefelt-Winther D. A conceptual and methodological critique of internet addiction research: towards a model of compensatory internet use. Comput Hum Behav. 2014;31:351–4. https://doi.org/10.1016/j.chb.2013.10.059.

  52. Starcevic V. Tolerance and withdrawal symptoms may not be helpful to enhance understanding of behavioural addictions. Addiction. 2016;111:1307–8. https://doi.org/10.1111/add.13381.

  53. Holmgren HG, Coyne SM. Can’t stop scrolling!: pathological use of social networking sites in emerging adulthood. Addict Res Theory. 2017;25:375–82. https://doi.org/10.1080/16066359.2017.1294164.

  54. Yu S, Wu AMS, Pesigan IJA. Cognitive and psychosocial health risk factors of social networking addiction. Int J Ment Health Addiction. 2016;14:550–64. https://doi.org/10.1007/s11469-015-9612-8.

  55. Giordano AL, Cashwell CS. Cybersex addiction among college students: a prevalence study. Sex Addict Compuls. 2017;24:47–57. https://doi.org/10.1080/10720162.2017.1287612.

  56. Sawilowsky SS. New Effect Size Rules of Thumb. J Mod App Stat Meth. 2009;8:597–9. https://doi.org/10.22237/jmasm/1257035100.

Acknowledgements

The authors wish to thank Nina Syring and Saskia Schraub for their help with the data collection. The publication of this work was supported by the German Research Foundation (DFG) within the funding programme Open Access Publishing.

Funding

This study received no funding and was conducted as an independent study.

Author information

Authors and Affiliations

Authors

Contributions

AB and MM conceived the study, MM and ST collected the data. MM conducted the statistical analysis. MM wrote the first draft of the manuscript. AB provided manuscript editing and consultation on the data analysis. All authors contributed to and have approved the final manuscript.

Corresponding author

Correspondence to Manuel Mennig.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained from the Ethics Committee of the Department of Psychology of the Philipps University of Marburg, Germany. Data were collected anonymously, and the participants provided informed consent by clicking on a consent button before they were able to proceed to the online survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Table 5 Comparison of the subsamples with regard to age, gender, Internet use, SNS/OP use, sIAT, SNSDQ/OPDQ and BSI scores
Table 6 Comparison of the gender distribution in the subsamples

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Mennig, M., Tennie, S. & Barke, A. A psychometric approach to assessments of problematic use of online pornography and social networking sites based on the conceptualizations of internet gaming disorder. BMC Psychiatry 20, 318 (2020). https://doi.org/10.1186/s12888-020-02702-0
