
Gaining an understanding of the most common attributes – ‘dimensions’ – of dyslexia encountered by professional colleagues in their work with students at university;

and how this is leading to a revision of the project’s research methodology.

Part 1:

Introduction:  Students with dyslexia tend to be poorly organized, often find reading complicated academic texts challenging, and frequently report feedback from tutors describing their essays as confusing or poorly structured – there is plenty of research to support this.  But many other students present these same challenges to learning development tutors who are trying to guide them towards strong academic achievement at university.  As one of these tutors, I have my own anecdotal evidence about the most common attributes I have encountered, both through my work with students with dyslexia at the University of Southampton and, more recently, through guiding students across the learning community at the University of Bedfordshire.  Reflecting on this is driving a re-think of the design of the project’s main data-collecting tool, with a plan to now include a ‘belt-and-braces’ back-up for identifying students with a dyslexic profile.  This will support my Locus of Control Profiler as a discriminator, but without resorting to a standard screening tool for dyslexia.  To develop it, it seemed useful to gain an insight into colleagues’ experiences of the aspects of the dyslexic profile that they commonly encounter in their work supporting students’ learning at university, so that I can build this part of my Main Questionnaire on more than my own experience.  To find out more, I devised and deployed a short questionnaire asking fellow learning development and study-skills tutors about the ‘dimensions’ of dyslexia that they most commonly encounter.

Existing rationale:  I am constructing the main data-collecting tool for this project and, in doing so, I have been reflecting on the process that I scoped out in my earlier Research Design outline.  It has been clear from the outset that a significant challenge would be to establish the research group ‘DNI’ – that is, students who exhibit a typically dyslexic profile but who are unidentified as dyslexic.  All my critics so far have spotted this as a potential Achilles heel for the project, but I have maintained my confidence in the ‘Locus of Control’ Profile idea that emerged from the MSc dissertation pilot study as a discriminator between dyslexic and non-dyslexic individuals.  However, those profiles were all generated from data collected from students who had been identified as dyslexic through conventional screening processes; as such, I have no profiles from non-dyslexic students to compare them against, and hence no way yet to develop the process as a discriminator.

The plan in the Research Design outline is to develop the earlier data-collecting questionnaire so that it can be deployed both to students with dyslexia and to those assumed to be non-dyslexic by virtue of having no association with the university’s dyslexia support service.  This development is necessary because many questions in the pilot questionnaire refer specifically to dyslexia – for example: ‘I don’t think my dyslexia makes me any more anxious than anyone else’ or ‘My friends know about my dyslexia’ – and so these are to be rephrased to replace ‘dyslexia’ with ‘learning challenges’, which retains the sense and meaning of each response item without any specific reference to dyslexia. In this way, it is felt that the 5 scales of this ‘profiler’ section of the QNR can remain relatively unchanged.

To recap: the 5 scales in the profiler are attempting to gain a measure of a respondent’s:

… with each of these 5 scales comprising 6 response items, the values selected being combined to provide a measure on each scale. These 5 measures are plotted together to generate a locus of control profile that is unique to each respondent.
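The scoring step just described can be sketched in a few lines. This is only an illustrative sketch: the post does not specify exactly how the six item values are combined (a simple mean is assumed here), and the function name is hypothetical.

```python
def scale_scores(item_values, items_per_scale=6):
    """Combine ordered item responses into one measure per scale.

    item_values: the respondent's 30 selected values, ordered
    scale-by-scale (5 scales x 6 items).  Assumption: each scale's
    measure is the mean of its 6 item values - the post does not
    state the exact combination rule.
    """
    assert len(item_values) % items_per_scale == 0
    return [
        sum(item_values[i:i + items_per_scale]) / items_per_scale
        for i in range(0, len(item_values), items_per_scale)
    ]
```

The five returned measures would then be plotted together to form a respondent’s unique profile.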

The profiles generated in the pilot study (the MSc project that preceded this PhD research) were all built from QNR responses from students with dyslexia, as this was the research group at that time, and to date it has not been possible to create profiles from non-dyslexic students for comparison. The complete set of profiles is available on the project webpages here. However, given the wealth of published research literature (for example, see Banks & Woolfson, 2008, for an interesting commentary on academic self-perceptions) providing evidence that those with learning difficulties/learning disabilities/learning differences (depending on one’s academic background/geographical location/research perspective) exhibit different characteristic levels in any or all of these 5 attributes (scales), it seemed reasonable to assume that profiles built using data collected from non-dyslexic students would look different. Hence it is this difference in profiles between students with dyslexia (research group DI) and students without dyslexia (research group ND) that is to be used as the discriminator in searching for students with unidentified dyslexia (research group DNI).

I had originally intended to deploy two data-collecting questionnaires: the first to establish the LoC profiler as a discriminator for ‘the dyslexic profile’ and the second to test academic behavioural confidence (ABC; Sander & Sanders, 2003). In fact, in thinking through the outline of the project right at the outset, well before I had fully considered the practical challenges of executing it, I had thought that three questionnaire deployments would be required – and indeed, in an ideal world, this would still remain the best option and was detailed in the original research proposal for this project. The three QNR deployments would be:

  1. Deploy a modified version of the pilot study (MSc project) questionnaire to two groups of students: one group with identified dyslexia and the other with no indication of dyslexia – that is ‘ordinary’ students. Use the data collected to establish a) a typical LoC profile for a student with dyslexia and a second, typical LoC profile for a student assumed not to be dyslexic, hence establishing the profiler as a discriminator for The Dyslexic Profile. Retain these two data groups ‘on file’ so to speak as each would respectively form research group DI and research group ND.
  2. Deploy this same questionnaire again to a much wider range of students from a group assumed not to be dyslexic and from the data collected, identify students who exhibit a dyslexic profile but who don’t have any history of dyslexia being identified. Hence this would establish research group DNI.
  3. Deploy Sander’s Academic Behavioural Confidence Scale questionnaire to the students in each of the three research groups and relate the analysis of the results to the project research hypothesis: “students who exhibit a dyslexic profile but who are not identified as dyslexic present a higher academic behavioural confidence than students who are identified as dyslexic”.

But multiple questionnaire deployments present several practical challenges.

First of all, given that access to suitable student databases has been acquired, it would be a straightforward process to execute deployment ‘1’ above to students from each of these databases and, from the data collected, to generate the LoC profiles and examine them for the significant differences I am searching for between students with dyslexia and students with no indication of dyslexia. Thus research groups DI and ND could be established. However, in order to later deploy Sander’s ABC Scale questionnaire (deployment ‘3’ above) to students in each of these research groups, I would have to be able to identify each student so that they could be contacted a second time with a request to complete the second questionnaire.  This raises an issue about the necessarily stronger level of confidentiality required for questionnaire responses that are not anonymous, which I am hoping to avoid – not least because I think it likely that I will get a better response rate if respondents know that their answers are received anonymously and cannot later be individually attributed to them.

Secondly, which database would I use to try to find research group DNI? Ideally this should be the complete student population of the university, but the same issue about student identification and confidentiality arises. In fact, there may be an ethical dilemma too, since the LoC profiler will be searching for and, in theory, revealing students who, according to the profiler at least, are exhibiting a dyslexic profile unknown to them – which places an obligation on the researcher to disclose this ‘possible dyslexia’ to these students. This is another issue I am seeking to avoid, as it raises challenges about how to deal with the psychological impacts that disclosure may create, which are over and above the main focus of the research.

In both situations above, identifiable students would then need to be individually contacted again and asked to complete the second questionnaire, which they may be reluctant to do for a number of reasons – not least through irritation at being asked to disclose more information to the research, leading to a disinclination to set aside the time required to do so.

However, possibly a more significant factor is that, since this is breaking new ground in dyslexia research, there exists the possibility that questionnaire deployment 1 (above) will not provide sufficiently robust data for the LoC profiler to discriminate between students who exhibit a dyslexic profile and those who do not, and hence would not enable research group DNI to be properly established.

Revised rationale:  So taking these issues into account, I have decided to revise the research methodology in the following ways:

  1. Combine the LoC Profiler with Sander’s ABC Scale into a single questionnaire and deploy this to the student databases just once;
  2. Build an additional section into this questionnaire to act as a back-up dyslexia discriminator, to guard against the data collected through the LoC Profiler being weak.

In modifying the data collection process in this way, students will only need to be recruited once, with no follow-up requirement, and hence questionnaire responses can be anonymous.  However, even though no names or contact details will be requested as part of the data-collecting process, it is felt that there should still be a mechanism for identifying any particular QNR response in a way that is distinct from the unique data that it collects.  To achieve this, a Questionnaire Response Identifier (QRI) will be built into the questionnaire; it will be known to the student and created by the form processor as part of the data. This QRI will be a randomly generated number which, in order to reduce the likelihood of duplication, will be 8 digits long, and it will form part of the data sent when the respondent submits their completed QNR.  This is important because it will enable the respondent to contact me to request revocation of the data they have sent, by quoting their unique QRI, should they have a change of heart about participating in the research.  Anonymity will be preserved because the means for a student to do this will be a Participant Revocation Form, a link to which will be included in the Questionnaire Acknowledgement page – a kind of ‘thank you’, or receipt, which displaces the questionnaire once sent and which displays the respondent’s QRI.  A respondent who wants to revoke their data will need to transfer their QRI into the form and submit it, again without any need to identify themselves. On receipt of their request to withdraw their data contribution, I will be able to identify it from the QRI, then find, remove and erase it.
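As a rough sketch of the QRI mechanism (the function name is hypothetical, and the post does not specify how the form processor actually generates the number):

```python
import random


def generate_qri() -> str:
    # Hypothetical sketch of the Questionnaire Response Identifier:
    # a randomly generated 8-digit number, zero-padded so that it is
    # always exactly 8 characters long.
    return f"{random.randrange(10**8):08d}"
```

With 10⁸ possible values, a birthday-problem estimate puts the chance of any duplicate among n responses at roughly n(n−1)/2 × 10⁻⁸ – small for the response volumes discussed here, though still worth checking for on receipt.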

This complete, revised process also ameliorates a slight unease about the ethical dilemma of collecting data from people – albeit self-judged opinions – that may indicate an aspect of their learning profile of which they have previously been unaware (that is, the possibility that a dyslexic learning difference may be present) and then not communicating this to them. (This point was also raised by members of my Registration Panel hearing earlier in the year.) By completely anonymising the data, this possible issue is eliminated: even when data collected from a particular questionnaire response does indicate ‘dimensions’ of dyslexia, neither the respondent nor I will know who they are.  Indeed, this is the whole point of the research project – that is, trying to determine whether learners in research group DNI ((possible) dyslexia NOT identified) do indeed exhibit a higher level of Academic Behavioural Confidence than their dyslexia-identified peers (research group DI), in which case we may conclude that it is likely to be academically advantageous for them to remain in ignorance of their possible dyslexia. Were it not possible to establish this research group in a way that satisfies the strict rules of ethical behaviour in research, the complete research rationale would founder.

Dimensions of dyslexia – finding out colleagues’ views:

I briefly looked at the Adult Checklist for dyslexia provided by the British Dyslexia Association. Although it persists in referring to a ‘diagnosis’ of dyslexia – thus continuing to allude to dyslexia in the context of disability in the medical model, despite otherwise casting a very positive light on dyslexia as a difference – it nevertheless provides a useful list of characteristics that are typically associated with dyslexia.

I adapted some of these characteristics, and included others based on my own work supporting students with dyslexia at university, to establish a list of 18 attributes which I have labelled ‘dimensions’. These were set out in a questionnaire prefixed by the common stem statement: ‘In your interactions with students with dyslexia, to what extent do you encounter each of these dimensions?’, so that each of the 18 dimensions formed a leaf statement to combine with the stem (although in the questionnaire preamble I actually refer to the dimensions as ‘stem’ statements – I will adjust this terminology appropriately in the final version of the project’s Main Questionnaire later). Respondents were asked to record the ‘extent’ of their encounters by moving a slider along a continuous scale ranging from 0% to 100%, according to the guidelines at the top of the list of leaf statements.

To start with, the slider is parked in a default position of 50%; moving it along the scale then displays, in the output window, the percentage corresponding to the slider’s position on the scale.


The 18 leaf statements, labelled ‘Dimension 01 … 18’ are:

It is recognized that this isn’t an exhaustive list, and in the preamble to the questionnaire I was at pains to point this out, indicating that colleagues may have come across other common attributes in their interactions with students that I had not encountered in mine.  To provide an opportunity for colleagues to record these, I included a free-text area at the foot of the questionnaire with an invitation to record other characteristics or attributes, together with a % indication of their frequency of encounter.

Once tested and adjusted for browser compatibility issues with the slider input, the questionnaire was deployed through a link in an e-mail sent to the most appropriate Student Service department in all UK universities, the list being identified from the Universities UK list of members on their webpages. In total, 116 e-mail invitations to participate in the questionnaire were sent out in mid-August and, to date (5th Sept 2015), 36 uniquely identifiable responses have been received, although 6 of these appeared to be duplicates – respondents sending them twice, I think – leaving 30 valid responses to be analysed.
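The duplicate-filtering step can be sketched as follows, assuming each response carries some unique identifier (the field name ‘qri’ here is hypothetical) and that the first copy received is the one kept:

```python
def dedupe_responses(responses):
    """Drop duplicate questionnaire responses, keeping the first
    occurrence of each identifier.

    responses: a list of dicts, each assumed to carry a 'qri' key
    (hypothetical field name) uniquely identifying one submission.
    """
    seen = set()
    unique = []
    for response in responses:
        if response["qri"] not in seen:
            seen.add(response["qri"])
            unique.append(response)
    return unique
```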

Part 2 of this blog-post presents the data received with an initial analysis and preliminary discussion of the results. Part 3 of the blog-post discusses how the implications of the analysis will be used to modify the project’s Main Questionnaire.

Banks, M., Woolfson, L., 2008, Why do students think they fail? The relationship between attributions and academic self-perceptions. British Journal of Special Education, 35(1), 49-56;
Sander, P., Sanders, L., 2003, Measuring confidence in academic study: a summary report. Electronic Journal of Research in Educational Psychology and Psychopedagogy, 1(1), 1-17;
Smythe, I., Everatt, J., 2001, Adult Checklist, British Dyslexia Association, available at: http://www.bdadyslexia.org.uk/common/ckeditor/filemanager/userfiles/Adult-Checklist.pdf, accessed on: 3rd September 2015;
