
Jakob Sverre Løvstad
CTO, Seema
20 June 2025
When it comes to the science of diversity, it's relatively easy to communicate various findings from psychology, sociology, political science and so on. It's relatively easy to hear about "fun things the research shows that we can take with us". But something that keeps coming up as a challenge is understanding the scientific method when trying to figure out the state of affairs in an organisation. So here's the world's shortest and most straightforward description of how you can (well, "must" is a more accurate term) think - whether you're talking specifically about diversity or about other "soft factors" like employee surveys or whatever.
1. Collection of data
"The biggest weakness I keep seeing out there is that the survey itself has no validity. When you have a few questions that can be answered in a couple of minutes, you can almost swear that the instrument itself, i.e. the survey, is not validated. Or even worse: When you get various individual questions at regular or irregular intervals, you don't even have a real survey that could be analysed.
The thing is, if you haven't checked absolutely fundamental things like what you're actually measuring (individual questions can be interpreted in countless ways), that answers don't depend on the mood of the day (or other volatile conditions), that what you're asking has a scientifically tested correlation with something significant (such as performance or sick leave or something else the company cares about) and so on, then it's simply pointless to ask.
You may give the impression of relevant activity, but that's just playing to the gallery as long as you can't prove that the input data holds water.
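To make the reliability point concrete, here's a minimal sketch in Python of one of the most basic checks you'd run on a survey scale: Cronbach's alpha, i.e. whether the items on a scale actually hang together as one construct. The scale name, item columns and data are invented for illustration; real validation also involves test-retest reliability and evidence that scores predict outcomes the company cares about.

```python
# Minimal sketch: Cronbach's alpha as one basic internal-consistency check.
# All column names and data below are hypothetical.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of survey items (rows = respondents)."""
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a five-item "psychological safety" scale (1-5 Likert)
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 5)),
                         columns=[f"ps_item_{i}" for i in range(1, 6)])

# Rule of thumb: scales around 0.7 or above are usually considered consistent.
# Purely random answers like these will score low - which is exactly the point
# of checking before you trust the numbers.
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```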
2. Descriptive statistics
In all honesty, people are quite good at this. Descriptive statistics are any statistics that show "how things are right now": bar charts, pie charts, histograms and so on - anything that points out simple facts like "we have 40% women and 60% men in the organisation" or "profits have increased 3% from January to March".
Descriptive statistics really say nothing about why something is the way it is, but they provide a snapshot of some phenomenon. This is fine, and occasionally provides some insightful eye-openers. For example, when we show that the companies we have analysed have 45% diversity on average, the reaction is a bit like "wow, that's more than we thought". It's very cool, but it doesn't say anything about whether it leads to something good or bad, or whether it means anything at all.
The downside of descriptive statistics is that some people start to make unreasonable assumptions based on them. For example, if you see a wage gap or an invoicing gap between certain groups, it is tempting to take action - without knowing whether the cause makes action necessary/wise, or what action might be appropriate.
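For the sake of illustration, here's a minimal sketch of what descriptive statistics amounts to in code: simple counts and proportions. The employee data below is invented; the point is just that these numbers describe the distribution, not the cause.

```python
# Minimal sketch: purely descriptive statistics on hypothetical HR data.
import pandas as pd

employees = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "F", "M", "M", "F", "M"],
    "department": ["IT", "Legal", "IT", "Legal", "IT",
                   "IT", "Legal", "IT", "IT", "Legal"],
})

# Share of each gender in the organisation, e.g. "40% women, 60% men"
print(employees["gender"].value_counts(normalize=True))

# The same breakdown per department - still purely descriptive:
# it tells you what the distribution is, not why, and not whether it matters.
print(pd.crosstab(employees["department"], employees["gender"], normalize="index"))
```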
3. Inferential statistics
Due to the problems described in point 2, we need what is called inferential statistics. This is the part of the field where you actually look for correlations in the data, and can say with some probability whether one phenomenon is related to another. After all, it is scientifically rather uninteresting to say "329,000 people die of Parkinson's annually". What is interesting is whether you can say that "this gene seems to be closely associated with having Parkinson's and dying early from it". That you can use for something.
The same is true in organisational research. It can be very tempting for a manager to say, "Wow, our IT department is paid 10% less than our lawyers!" But if you don't look at correlations, this is just rubbish. It may be that inferential statistics show that the difference arises because the IT department employs far more juniors, because the hourly rate differs between the groups, because the department has had less time in the market (and thus less business), and many other things. As a rule, there is a whole range of factors with different weights that together make up a complex explanation for what you see.
Of course, this also applies to our work with diversity: everywhere we go, we find a whole host of different correlations that explain differences in well-being, meaning, motivation, turnover and so on between groups. Without knowing such correlations, taking action is like shooting in the dark (something many managers also express general frustration with in contexts where surveys are used).
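As an illustration of the difference from point 2, here's a sketch of the inferential step using the pay-gap example above. The data, the confounder (seniority) and the effect sizes are all made up; the point is that a regression controlling for a plausible confounder shows how much of a raw gap actually remains.

```python
# Minimal sketch: from a raw descriptive gap to a model that controls for a
# confounder. All numbers are fabricated for illustration (hypothetical currency
# units); a real analysis would use actual payroll data and check assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
department = rng.choice(["IT", "Legal"], size=n, p=[0.6, 0.4])
# Hypothetical confounder: IT hires far more juniors than Legal does
seniority = np.where(department == "IT",
                     rng.normal(4, 2, n), rng.normal(9, 3, n)).clip(0)
salary = 500_000 + 25_000 * seniority + rng.normal(0, 30_000, n)

df = pd.DataFrame({"department": department,
                   "seniority": seniority,
                   "salary": salary})

# Raw gap: looks like IT is paid much less than Legal...
print(df.groupby("department")["salary"].mean())

# ...but with seniority in the model, the department coefficient tells you how
# much of the gap (if any) remains after accounting for that confounder.
model = smf.ols("salary ~ department + seniority", data=df).fit()
print(model.params)
```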
4. Qualitative in-depth surveys
Once you hopefully have a handle on the correlations between various important phenomena in the organisation, it is good practice to conduct a few interviews across relevant representatives from the organisation to find out what the correlations mean at a detailed level. It's very common to hear a number of hypotheses on "why trust is related to psychological safety" (or whatever) in response to various statistics. But in our experience, these guesses are pretty random, and often not in line with what you learn if you interview a representative sample and do a proper qualitative analysis. This is where you get down to the nitty-gritty and, given that you have the respondents' trust, you get pretty straightforward information that can be converted into concrete measures.
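The qualitative analysis itself is manual work (interviewing, transcribing, coding), but once transcripts have been coded into themes, the tail end can be summarised very simply. The respondents and themes below are purely hypothetical; this just shows the kind of tally that feeds into concrete measures.

```python
# Minimal sketch: counting how often coded interview themes recur.
# The coding itself (reading transcripts, assigning themes) is the real work.
from collections import Counter

coded_interviews = [
    {"respondent": "A", "themes": ["unclear expectations", "fear of speaking up"]},
    {"respondent": "B", "themes": ["fear of speaking up", "trust in nearest manager"]},
    {"respondent": "C", "themes": ["unclear expectations"]},
]

theme_counts = Counter(theme for i in coded_interviews for theme in i["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned by {count} of {len(coded_interviews)} respondents")
```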
The above is completely basic stuff, simplified so much that statisticians may want to take me to task, but I think it's worth covering the very simple - so that those who are not total nerds also understand how you, or at least we, work with this here.