This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on http://mental.jmir.org/, as well as this copyright and license information must be included.
Distorted perception of one’s body and appearance is a core feature of several psychiatric disorders, including anorexia nervosa and body dysmorphic disorder, and operates to varying degrees in nonclinical populations. Yet body image perception is challenging to assess, given its subjective nature and variety of manifestations. Currently available methods have several limitations, including a restricted ability to assess perceptions of specific body areas. To address these limitations, we created Somatomap, a mobile tool that enables individuals to visually represent their perception of body-part sizes and shapes as well as areas of body concern, and to record the emotional valence of those concerns.
This study aimed to develop and pilot test the feasibility of a novel mobile tool for assessing 2D and 3D body image perception.
We developed a mobile 2D tool consisting of a manikin figure on which participants outline areas of body concern and indicate the nature, intensity, and emotional valence of each concern. We also developed a mobile 3D tool consisting of an avatar on which participants select individual body parts and use sliders to manipulate their size and shape. The tool was pilot tested on 103 women: 65 professional fashion models, a group disproportionately exposed to their own visual appearance, and 38 nonmodels from the general population. Acceptability was assessed via a usability rating scale. To identify areas of body concern in 2D, topographical body maps were created by combining assessments across individuals. Statistical body maps of group differences in body concern were subsequently calculated using a proportional z-score formula. To identify areas of body concern in 3D, participants’ subjective estimates from the 3D avatar were compared with corresponding measurements of their actual body parts. Discrepancy scores were calculated as the difference between the perceived and actual measurements of each body part and evaluated using multivariate analysis of covariance.
Statistical body maps revealed different areas of body concern between models (more frequently about thighs and buttocks) and nonmodels (more frequently about abdomen/waist). Models were more accurate at estimating their overall body size, whereas nonmodels tended to underestimate the size of individual body parts, showing greater discrepancy scores for bust, biceps, waist, hips, and calves but not shoulders and thighs. Models and nonmodels reported high ease-of-use scores (8.4/10 and 8.5/10, respectively), and the resulting 3D avatar closely resembled their actual body (72.7% and 75.2%, respectively).
These pilot results suggest that Somatomap is feasible to use and offers new opportunities for assessment of body image perception in mobile settings. Although further testing is needed to determine the applicability of this approach to other populations, Somatomap provides unique insight into how humans perceive and represent the visual characteristics of their body.
Accurately perceiving the overall state of the body is a key sensory task necessary for health maintenance in humans [
Body dissatisfaction, defined as unhappiness with self-perceived flaws in body features, is an especially common issue for women [
Disturbances of body perception often occur in individuals with psychiatric disorders. For instance, individuals with anorexia nervosa tend to overestimate characteristics of certain body areas relative to healthy comparisons [
It is less clear if and how nonclinical populations differ in body image perceptions. Discordant body perceptions (eg, body dissatisfaction or seeing one’s self as “fat” when slim) have been theorized to be strengthened and intensified for some women by social media and media image exposure [
Most current body perception assessments rely on language-based methods such as verbal interviews and questionnaires. Verbal interviews typically involve an in-person discussion with a clinician or researcher, which is time intensive, requires specialized training, and may lack the degree of specificity needed for capturing an accurate snapshot of body-related perceptions or concerns. For example, it can be challenging to describe in words exactly how large one perceives a particular area of their body to be. Questionnaire-based scales measuring body image perceptions typically assess attitudes about the body, both negative [
To address existing gaps in the ability to accurately assess body perceptions, we developed Somatomap, a novel mobile tool intended to quantitatively and qualitatively assess different aspects of body image perception in 2D (ie, mapping body concerns, types of concern, and emotions associated with concern) and 3D (ie, measuring the degree of disturbance of body image perception for body-part sizes and shapes). In this manuscript, we describe the development of this tool for assessing body image perception and the results of pilot feasibility and usability testing in female fashion models and in a general-population reference sample. Given the greater attention and feedback applied to their own visual body characteristics as a function of their occupation, we hypothesized that fashion models would (1) perceive concerns in body areas that distinctly differ from those of nonmodels and (2) estimate the size of their body parts and their overall body size more accurately. Finally, we predicted that the Somatomap tool would be sensitive to both kinds of differences.
We developed Somatomap as a Web-based self-assessment tool for measuring body image perception in 2D and 3D. The 2D assessment displays a picture of an androgynous manikin; the user is asked to imagine this manikin as their own body and draw directly upon it to outline an area where they perceive a body concern (
Somatomap was built on Chorus, a HIPAA (Health Insurance Portability and Accountability Act)-compliant visual development platform for creating mobile Web, text-messaging, and interactive voice apps [
Somatomap 2D. Step-by-step screenshots of avatars and a subsample of possible body concerns and emotion ratings that can be endorsed for the 2D assessment. Participants first indicate one area of body concern by outlining it on the avatar (top left and top right), with the ability to zoom in by double tapping the figure to indicate body concerns for smaller areas or areas with more detail (top right). They are then asked to select the type of concerns pertaining to the body area (bottom left shows a subsample with several concerns selected; users can also enter a unique concern if theirs is not listed). Finally, they are asked to choose the feelings pertaining to the area of body concern (bottom right shows a subsample) or enter their own feelings. Participants then repeat this process for each body concern. The top right depicts three different examples of body concern outlines.
Somatomap 3D. Step-by-step screenshots of avatars for the 3D assessment. Bottom: 3D avatar shown at the start of the assessment. Participants were instructed as follows: “Please use the sliders at the left to create what your body looks like today.” Participants could rotate the avatar to view it from multiple angles as they manipulated the sliders (screenshots show examples of different orientations). Only a single avatar is visible at any given time. Top: Example of a final avatar after manipulating the sliders (shown from multiple angles matching the original avatar).
We recruited a sample of 65 female fashion models (age=23.4 [SD 5.5] years) from professional modeling agencies in the United Kingdom. Models were initially recruited by telephone and asked to visit their agency; all who were contacted came in. We also recruited a sample of nonmodels (n=38; age=25.4 [SD 5.2] years) from the general UK population through flyers and social media. Neither group was informed of the study hypotheses in advance, and no participant in either group declined to participate after arriving for the consenting procedure and evaluation.
The study was approved by the School of Psychology Ethics Review Board at the University of Nottingham. Testing sessions occurred for fashion models at their modeling agencies and for nonmodels at the University of Nottingham. Prior to the experiment, each participant provided written informed consent. Participants were seated at a laptop computer to complete demographic questions adapted from the PhenX toolkit [
In Somatomap 2D, participants were asked to outline a specific area of body concern on a 2D human manikin using a laptop trackpad (13-inch MacBook Air, Apple Inc). Once an outline was drawn, its interior filled in automatically, resulting in an “area of concern.” This procedure gave participants maximum flexibility to trace any body region they chose, with pixel-level specificity. They then entered details about their concern by selecting each type of concern and the emotions surrounding it and used a slider to indicate the magnitude of the body concern. If they had more than one body concern, they repeated this procedure for each individual area of concern.
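A minimal sketch of the outline-to-area step (in Python with matplotlib; the canvas size, coordinates, and function name are illustrative assumptions rather than the actual Somatomap implementation) shows how a traced outline can be rasterized into a filled, pixel-level mask:

```python
# Hypothetical sketch (not the actual Somatomap code): rasterizing a traced
# outline into a filled, pixel-level "area of concern" mask.
import numpy as np
from matplotlib.path import Path

def fill_outline(outline_xy, width=400, height=800):
    """Convert a traced outline (list of (x, y) points) into a boolean
    pixel mask whose interior is filled, mirroring the auto-fill step."""
    path = Path(outline_xy, closed=True)
    # Build a grid of pixel-center coordinates covering the manikin canvas.
    xs, ys = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    points = np.column_stack([xs.ravel(), ys.ravel()])
    mask = path.contains_points(points).reshape(height, width)
    return mask  # True inside the outlined area of concern

# Example: a rough triangle over a thigh region of a 400x800 canvas.
concern_mask = fill_outline([(150, 500), (190, 500), (170, 620)])
print(concern_mask.sum(), "pixels flagged as the area of concern")
```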
In Somatomap 3D, participants could rotate a 3D human avatar in multiple directions and adjust body areas independently of one another. Participants were instructed as follows: “Please use the sliders at the left to create what your body looks like today.” The 3D usability assessment was an online questionnaire about participants’ experience of using the app. Questions asked how difficult/easy and how frustrating/enjoyable the tool was to use and assessed the degree of identification with the original avatar (before moving the sliders) and with the final avatar (after all slider adjustments were complete).
After completing all body image perception ratings, each participant’s shoulders, bust, biceps, waist, hips, thighs, and calves were measured with a tape measure following a standardized protocol adapted from the PhenX toolkit [
Proportional maps of body concern for each group were generated from Somatomap 2D tracings by collapsing across all areas of body concern. This approach of displaying body concerns proportionally is similar to our previously published studies involving body maps of cardiac sensation [
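The group comparison underlying the statistical body maps is described in the abstract as a proportional z-score; at each pixel this corresponds to the standard two-proportion z-test, z = (p1 − p2) / sqrt(p̂(1 − p̂)(1/n1 + 1/n2)), where p1 and p2 are the proportions of models and nonmodels whose concern outlines include that pixel and p̂ is the pooled proportion. A minimal sketch in Python follows; the array names and shapes are our illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch (assumed data layout): per-pixel two-proportion z-scores
# comparing concern-outline frequency between groups. `model_masks` and
# `nonmodel_masks` are boolean arrays of shape (n_participants, height, width),
# one concern mask per participant.
import numpy as np

def proportion_zmap(model_masks, nonmodel_masks):
    n1, n2 = len(model_masks), len(nonmodel_masks)
    p1 = model_masks.mean(axis=0)      # per-pixel endorsement rate, models
    p2 = nonmodel_masks.mean(axis=0)   # per-pixel endorsement rate, nonmodels
    pooled = (model_masks.sum(axis=0) + nonmodel_masks.sum(axis=0)) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    # Pixels marked by everyone or by no one have zero pooled variance: z = 0.
    z = np.zeros_like(se)
    np.divide(p1 - p2, se, out=z, where=se > 0)
    return z  # positive: concern more frequent in models; negative: nonmodels
```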
Perceived body measurement values were converted from arbitrary units to centimeters via piecewise linear interpolation, using the actual body-part sizes of the initial female volunteer who was scanned to create the 3D avatar. Body parts were measured with an in-engine ruler at three slider settings: 0.5, 0, and 1. Separate linear interpolations were then computed for values between 0 and 0.5 and for values between 0.5 and 1. The premeasured value at the 0.5 setting served as the reference from which the appropriate scale factor was computed, by multiplying by the relative change calculated earlier for each part. For example, “0” on the slider might mean the foot is 75% of its size at “0.5,” and “1” might mean the foot is 130% of its size at “0.5.” These measurements and calculations were performed independently for each 3D model and its constituent body parts. Discrepancy scores (in centimeters) were then calculated by subtracting the actual body measurement from the perceived body measurement for each of the seven body areas that were physically measured.
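As a worked example of this conversion, consider the foot illustration above: if the in-engine ruler gives 20 cm at the 0.5 setting, then 75% of that (15 cm) corresponds to slider 0 and 130% (26 cm) to slider 1. The sketch below (in Python; the calibration numbers and function name are illustrative assumptions, not the actual implementation) maps a slider value to centimeters through these three knots:

```python
# Illustrative sketch (values hypothetical): converting a 3D-avatar slider
# setting in [0, 1] to centimeters by piecewise linear interpolation through
# the in-engine measurements taken at slider positions 0, 0.5, and 1.
import numpy as np

def slider_to_cm(slider_value, cm_at_0, cm_at_05, cm_at_1):
    """Piecewise linear map with a knot at 0.5, matching the two separate
    interpolations described in the text."""
    return np.interp(slider_value, [0.0, 0.5, 1.0], [cm_at_0, cm_at_05, cm_at_1])

# Foot example from the text: 20 cm at slider 0.5, 15 cm at 0, 26 cm at 1.
perceived_cm = slider_to_cm(0.8, 15.0, 20.0, 26.0)   # -> 23.6 cm
actual_cm = 22.4  # tape-measure value for the same body part (made up here)
discrepancy = perceived_cm - actual_cm  # negative would mean underestimation
print(round(perceived_cm, 1), round(discrepancy, 1))
```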
A multivariate analysis of covariance was used to determine whether there were group differences in the actual body measurements, the 3D body measurements, and the discrepancy scores. Covariates included BMI, height, and weight. When the multivariate analysis of covariance was significant, post hoc analyses of covariance were used to determine which specific variables differed between models and nonmodels.
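For illustration, an analysis of this form can be expressed with statsmodels as below; the data frame here is synthetic and the column names are our placeholders, as the manuscript does not specify the statistical software used.

```python
# Hedged sketch of the group comparison: a MANCOVA via statsmodels' MANOVA
# with covariates entered in the formula, followed by per-variable ANCOVAs.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant with discrepancy scores
# for the seven body areas, a group label, and the three covariates.
rng = np.random.default_rng(0)
n = 103
dependents = ["shoulder", "bust", "bicep", "waist", "hip", "thigh", "calf"]
df = pd.DataFrame({d: rng.normal(size=n) for d in dependents})
df["group"] = np.where(np.arange(n) < 65, "model", "nonmodel")
df["bmi"] = rng.normal(20, 2, size=n)
df["height"] = rng.normal(170, 7, size=n)
df["weight"] = rng.normal(57, 6, size=n)

# Multivariate test: all seven dependent variables against group + covariates.
formula = " + ".join(dependents) + " ~ group + bmi + height + weight"
print(MANOVA.from_formula(formula, data=df).mv_test())  # includes Wilks' lambda

# Post hoc ANCOVA per body area, examining the group effect.
for dep in dependents:
    fit = smf.ols(f"{dep} ~ group + bmi + height + weight", data=df).fit()
    print(dep, round(fit.pvalues["group[T.nonmodel]"], 3))  # 'model' is reference
```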
Key demographic data are included in
Demographic characteristics of female fashion models (n=65) and nonmodels (n=38), analyzed by t test.

| Characteristics | Model, mean (SD) | Nonmodel, mean (SD) | t (df) | P value |
|---|---|---|---|---|
| Age (years) | 23.4 (5.5) | 25.4 (5.2) | 1.7 (80.9) | .09 |
| Height (cm) | 175.9 (5.1) | 162.5 (6.3) | –11.2 (65.3) | <.001 |
| Weight (kg) | 57.5 (4.4) | 56.9 (8.4) | –0.4 (48.7) | .69 |
| Body mass index (kg/m²) | 18.6 (1.2) | 21.3 (2.8) | 5.7 (44.2) | <.001 |
Demographic characteristics of female fashion models (n=65) and nonmodels (n=38).

| Characteristics | Model, n (%) | Nonmodel, n (%) |
|---|---|---|
| **Ethnicity** | | |
| Caucasian | 44 (67.7) | 21 (55.3) |
| Asian (including East Indian) | 5 (7.7) | 7 (18.4) |
| Black | 3 (4.6) | 3 (7.9) |
| Hispanic/Latino | 1 (1.5) | 2 (5.3) |
| Mixed race | 12 (18.5) | 5 (13.1) |
| **Education** | | |
| Graduate school | 2 (3.1) | 7 (18.4) |
| University graduate | 12 (18.5) | 25 (65.8) |
| Some university | 6 (9.2) | 3 (7.9) |
| High school/A level/GEDᵃ | 32 (49.2) | 3 (7.9) |
| Some high school/A level/GED | 9 (13.8) | 0 (0) |
| Less than high school/A level/GED | 4 (6.2) | 0 (0) |

ᵃGED: general educational development.
Proportional body maps showed that models perceived body concerns in areas both similar to and distinct from those of nonmodels (
Proportional maps of body image concerns and associated emotions in female fashion models (left) and nonmodels (right).
Frequency and percentage of individual participants endorsing each affective rating per group (models: n=65; nonmodels: n=38).

| Affective type | Models endorsing affective rating, n (%) | Nonmodels endorsing affective rating, n (%) |
|---|---|---|
| **Negative** | | |
| Frustrated | 19 (29.2) | 7 (18.4) |
| Anxious, tense, worried, nervous | 18 (27.7) | 3 (7.9) |
| Other (eg, defeated, annoyed, self-conscious, exhausted, not enough, silly, don’t like) | 14 (21.5) | 9 (23.7) |
| Ashamed | 10 (15.4) | 9 (23.7) |
| Hopeless | 5 (7.7) | 4 (10.5) |
| Sad | 4 (6.2) | 6 (15.8) |
| Disgusted | 4 (6.2) | 4 (10.5) |
| Defective | 3 (4.6) | 3 (7.9) |
| Depressed | 2 (3.1) | 5 (13.2) |
| Fearful | 2 (3.1) | 1 (2.6) |
| Angry | 1 (1.5) | 2 (5.3) |
| Overwhelmed | 1 (1.5) | 0 (0) |
| Lonely | 1 (1.5) | 0 (0) |
| Numb/unreal/dead | 0 (0) | 1 (2.6) |
| Embarrassed | 0 (0) | 1 (2.6) |
| **Positive** | | |
| Looks ok/fine | 18 (27.7) | 13 (34.2) |
| Hopeful | 8 (12.3) | 3 (7.9) |
| Satisfied/content | 2 (3.1) | 4 (10.5) |
Statistical body map evaluating differences in body image concerns between female fashion models (in warm colors) and nonmodels (in cool colors; statistical threshold:
A summary of the actual and perceived body area sizes and discrepancies (perceived measure minus actual measurements) is listed in
Actual and perceived body measurements in female fashion models and nonmodels.

| Variable | Nonmodels, mean (SD) | Models, mean (SD) | P value | Partial η² | Cohen f |
|---|---|---|---|---|---|
| **Actual body measurements (cm)**: F (7,91)=50.33; Wilks Λᵇ=0.205; P<.001 | | | | | |
| Shoulder | 32.11 (2.22) | 35.18 (2.17) | .10 | 0.038 | 0.198 |
| Bust | 85.76 (6.01) | 80.58 (3.99) | .20 | 0.021 | 0.144 |
| Bicep | 28.61 (6.18) | 22.37 (2.30) | <.001 | 0.223 | 0.537 |
| Waist | 74.11 (6.45) | 64.68 (4.94) | <.001 | 0.244 | 0.568 |
| Hip | 96.11 (7.09) | 89.03 (4.65) | <.001 | 0.101 | 0.336 |
| Thigh girth | 46.53 (7.09) | 44.58 (3.02) | <.001 | 0.094 | 0.322 |
| Calf girth | 42.08 (6.09) | 32.51 (2.75) | <.001 | 0.450 | 0.904 |
| Scaled body average | 0.98 (0.07) | 0.99 (0.05) | .98 | 0.000004 | 0.002 |
| **Perceived (3D avatar) body measurements (cm)**: F (7,91)=4.85; Wilks Λᵇ=0.728; P<.001 | | | | | |
| Shoulder | 31.55 (1.91) | 31.38 (2.05) | .94 | 0.00006 | 0.008 |
| Bust | 79.18 (8.61) | 77.32 (8.42) | .09 | 0.040 | 0.205 |
| Bicep | 24.24 (2.80) | 22.37 (2.20) | .21 | 0.019 | 0.141 |
| Waist | 62.11 (2.77) | 60.31 (2.11) | .53 | 0.005 | 0.072 |
| Hip | 90.97 (5.92) | 89.54 (5.73) | .20 | 0.021 | 0.147 |
| Thigh girth | 34.45 (4.60) | 31.55 (2.73) | .18 | 0.027 | 0.166 |
| Calf girth | 31.55 (4.12) | 29.60 (2.95) | .87 | 0.0005 | 0.021 |
| Scaled body average | 0.98 (0.06) | 0.99 (0.04) | <.001 | 0.145 | 0.413 |
| **Discrepancy scores (perceived minus actual, cm)**: F (7,91)=21.03; Wilks Λᵇ=0.382; P<.001 | | | | | |
| Shoulder | –0.59 (2.58) | –3.83 (3.05) | .19 | 0.023 | 0.155 |
| Bust | –6.54 (8.12) | –3.23 (8.75) | .03 | 0.060 | 0.253 |
| Bicep | –4.34 (6.97) | 0.01 (2.76) | <.001 | 0.218 | 0.529 |
| Waist | –11.91 (6.27) | –4.30 (5.20) | <.001 | 0.166 | 0.447 |
| Hip | –5.04 (8.46) | 0.58 (6.13) | <.001 | 0.092 | 0.317 |
| Thigh girth | –12.05 (9.07) | –12.95 (3.32) | .19 | 0.024 | 0.158 |
| Calf girth | –10.53 (8.25) | –2.86 (3.54) | <.001 | 0.336 | 0.711 |
| Scaled body average | –0.62 (0.42) | –0.046 (0.43) | <.001 | 0.162 | 0.440 |
ᵇMeasured using multivariate analysis of covariance.
A total of 36 nonmodels and 65 models completed the usability rating questionnaire immediately after using the 3D portion of Somatomap (
Somatomap usability assessment results.
| Usability questions | Models (n=65), mean (SD) | Nonmodels (n=36ᵃ), mean (SD) |
|---|---|---|
| 1. How easy was this app to use? (1=extremely difficult to 10=extremely easy) Please explain. | 8.4 (2.42) | 8.5 (1.76) |
| 2. What was your experience using this app? (1=extremely frustrating to 10=extremely enjoyable) Please explain. | 5.9 (2.34)ᵇ | 7.4 (2.43)ᵇ |
| 3. How much did you identify with the original avatar? (0=not at all to 10=completely) Please explain. | 4.9 (2.75) | 5.6 (2.34) |
| 4. How closely did the final avatar you created reflect your body? (0%=not at all to 100%=completely) Please explain. | 72.7 (20.17) | 75.2 (17.09) |
ᵃTwo participants were unable to complete the user experience questionnaire because they needed to get to work; therefore, n=36 instead of 38.
b
In this study, we developed and pilot tested Somatomap, a novel mobile tool for assessing body image perception in both 2D and 3D. We tested this tool in female fashion models who, we hypothesized, would have greater expertise with, and therefore accuracy in, estimating their body shape and size relative to female nonmodels, given their profession. Both groups reported body concerns but in different areas: models were more concerned with the thighs/buttocks and nonmodels with the abdomen/waist. Models were more accurate at estimating their overall body size, whereas nonmodels tended to underestimate the size of individual body parts, showing greater discrepancy scores for the bust, biceps, waist, hips, and calves but not the shoulders and thighs. Both groups reported high ease-of-use scores and felt that the resulting 3D avatar closely resembled their actual body, suggesting a good usability experience with this tool. Overall, these pilot results suggest that Somatomap is feasible to use and capable of providing unique insight into how humans perceive and represent the visual characteristics of their body.
Body image perception is an inherently subjective phenomenon that is challenging to measure directly. To date, the standard methods for assessing body image perception in clinical settings have relied on verbal interviews, paper-based manikins, and still photographs [
We created Somatomap in an effort to achieve, as objectively as possible, an accurate digital snapshot of body image concerns, a quantification of perceptual accuracy between one’s internalized and actual body form at the level of individual body parts, and an ability to relate the two. Statistical body maps in Somatomap 2D identified female fashion models as having significantly more concerns about the thighs (especially the inner thigh) being too large compared to the nonmodels. This particular body concern may reflect a trend toward the desirability of having a “thigh gap,” that is, a gap or space between the thighs when standing upright with the feet together. For example, a 2015 online survey of 500 UK females found that 40% of women aged 16-65 years felt that they would feel more confident if they had a “thigh gap” [
These results in models and nonmodels may provide partial support for the social norm hypothesis, which states that judgments of body size/weight are influenced by visual proximity to different body types [
By facilitating the accurate measurement of attitudinal and perceptual aspects of body image disturbance, the Somatomap tool may allow for subsequent characterization of the underlying neural mechanisms in clinical and nonclinical conditions. For example, as the pilot results suggest, this tool may be sensitive to detecting the overestimation discrepancies for specific body areas (eg, waist, hips, and bust) that have been noted in individuals with anorexia nervosa [
By providing better insights into the perceptual mechanisms, Somatomap may assist in the effort to uncover latent factors underlying body image disturbance in various psychiatric illnesses, reveal important information about illness course, and possibly contribute to the development of novel treatments. When developing Somatomap, we aimed to generate a mobile tool capable of deployment over a broad range of devices, physical locations, and settings (ie, research and clinical). The cross-platform compatibility and HIPAA-compliant encryption (via Chorus), along with the estimation that 80% of adults will own a smartphone by 2020 [
This study has several limitations. First, usability data were obtained from participants after they used the 3D assessment portion of the tool; we did not collect separate usability data for the 2D assessment. Second, data collection occurred in a relatively small sample of women from the United Kingdom. Obtaining measures, and eventually norms, across a greater variety of racial/ethnic, socioeconomic, and sexual/gender groups will be important for determining the generalizability of this approach to global populations. Third, the 2D manikin consisted of an androgynous figure, and it is unclear whether a sex-specific figure would alter the assessments provided. However, having a consistently sized 2D model enabled us to perform statistical analyses across subjects more easily. Fourth, identification with the 3D avatar (before manipulation) was in the moderate range, and while it improved considerably after the final manipulation, it did not reach the highest possible limit. Possible changes to further improve avatar identification include offering more customizability of features beyond the hair and skin color options currently supported in the generic avatar, increasing the number of areas that can be modified (ie, beyond the seven presented here), adding new body modification parameters such as height/length (ie, beyond the girth/width modifications presented here), and improving avatar personalization, as it was recently noted that “personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts” [
Overall, these pilot results suggest that Somatomap is feasible to use and capable of providing unique insights into how humans perceive and represent the visual and size/shape characteristics of their body. Its advantages over commonly used tools include mobility; ease of use; customizable avatars that can flexibly represent users’ bodies with a variety of body shapes and sizes; and most of all, the ability to visualize and statistically quantify body image perception at the level of both individual body concerns (Somatomap 2D) and perceptions of individual body part size and shape (Somatomap 3D). Future clinical applications of this tool could include investigations of appearance concerns and body perception in disorders involving body image, such as eating disorders and body dysmorphic disorder. This potentially could be used both cross-sectionally as well as longitudinally to follow illness trajectory and changes over time with treatment.
BMI: body mass index
We would like to thank Kamilah St. Paul for helping with data collection, Shane Nearman for assistance with graphic creation, Hung-wen Yeh for statistical consultation, Dr Ruth Filik for her support, our human volunteers who provided body scans to generate the 3D avatars, and the professional modeling agencies and research participants for their contributions.
We would also like to acknowledge funding support from NIMH R01MH093676-02S1 (ACA), NIMH K23MH112949 (SSK), NIGMS P20GM121312 (SSK), NIMH R01MH105662-03S1 (JDF), and The William K Warren Foundation (CRN and SSK). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
ACA is founder of Insight Health Systems, Arevian Technologies, and Open Science Initiative. ACA developed the Chorus platform, which is licensed from the University of California Los Angeles to Insight Health Systems.