Admissions Article as Teachable Moment


I was on vacation with my family this week when my seven-year-old son overheard one of his cousins calling another member of the family a “tool” and asked me what that meant. I tried to explain that the term “tool” can be used to suggest that a person is easily used or manipulated by others, and that it’s another way of saying the person is a fool, or at least terribly misled.

Since he still seemed confused, I used a recent example from admissions. I had just read an article in the Journal of College Admissions regarding score optional admissions, or admissions decisions in which SAT or ACT scores are not used as factors. The author also had a nearly identical article in Recruitment and Retention in Higher Education. Perhaps some may consider it questionable to publish the same article under two different titles in two different publications, but that’s not a reason to consider the author a “tool”.

In the article(s), the author raises ethical questions regarding colleges and universities that have implemented score optional admission policies. He asserts that the reasons for implementing such policies are entirely self-serving. He also notes that most institutions with these policies claim higher SAT/ACT averages because they report only the averages of students who submitted scores.

I don’t doubt that some institutions have self-serving reasons for implementing such policies. I can speak for Mason, however. Mason, shameless plug here, may be the largest competitive institution in the nation to have implemented such a policy. We did it based on really solid data that standardized tests were not good predictors of success for our applicants with the strongest academic records. We allow students to substitute greater weighting on leadership experience in place of standardized test scores. So far, as with nearly every school that has implemented such a policy, we have found that students who we admit through our score optional policy do just as well as students with similar academic records admitted with scores. It appears, however, that the author never bothered to do a lick of research and has no idea of the data behind these decisions. Such sloppy and incompetent research could lead some to label the author a slacker, but not necessarily a tool.

His point on how colleges with score optional policies average their scores is even more ridiculous. He claims it is misleading not to include the scores of admitted score optional applicants in our admitted student averages. So, if I follow what the author is trying to pass off as logic, if I include scores in Mason’s averages that had nothing to do with students’ admission, I will somehow make my averages more accurate. In reality, following his method would make these statistics less likely to help future applicants understand their chances of admission. Such inept logic is most unfortunate, but again doesn’t brand the author a tool.

All of these issues were enough to get my attention and arouse my disdain for the articles. Imagine my delight, then, only a few days after I read the first of these wonderful articles, to receive an email from the company that employs the author (official motto, “helping you get your future students to accept higher tuition and lower financial aid, no matter what kind of education you decide to provide”) inviting me to participate in a phone interview survey of higher education practices and the use of standardized testing, “on behalf of a client in the college admissions testing industry” (would that be SAT or ACT? So many choices!).

Fascinating. Let’s review. The author works for a company that consults for educational organizations, and his company has a contract with one of the standardized testing companies. Those companies seem to perceive (wrongly, I think) that score optional policies are a threat. Then he writes an article (twice!) condemning the policies that concern his company’s client.

Now I don’t mean to disparage the author, whom I have never met and who I’m sure had very good intentions in writing his article – both times. I will say, however, that by the time I was done explaining, my seven-year-old understood the use of these terms (or at least said he did, possibly in order to get me to stop explaining). The question is, do you? Be seeing you.

7 Responses

  1. I find this topic very intriguing. The website I blog for is going to pose this very question to students, teachers, and colleges to get their feedback first hand. My feeling, as a teacher-blogger, is that these tests should be scrapped. There is a clear advantage to those who are good test takers or have coaching available to them. If colleges really “need” a test, they could devise their own admission test and administer it online, leveling the field for everyone while still preserving the concept of a test. If you are interested in seeing our informal results, you can read about it at http://www.morethangrades.com in our blog section:
    http://tinyurl.com/c8n4e5

  2. I too find this interesting. I am in the process of completing my George Mason application and I am very confused as to what to select to give me the best opportunity. My SAT scores are probably average, though I feel they do not give a clear picture of who I am. My character and drive for success cannot be seen in my SAT scores.
    Thank you,
    Martin Everhart

    • I’ll post a more complete answer on who should apply score optional. In the meantime, generally we replace scores with your leadership experience, as seen in your essay and extra-curriculars (including work and community service). Students admitted score optional usually have very strong academic records AND that evidence of leadership experience.

  3. Colleges don’t use their average standardized test scores just to give applicants an idea of where they stand. They use them as a comparison to other colleges. SAT/ACT scores are a huge part of the U.S. News survey, which called GMU the “#1 school to watch.” Your policy removes the lowest scores from the average, giving your school an advantage over other schools. Currently, it seems to many people that you support standardized testing only when it is convenient for GMU.

    • Kimbo, you raise a good point. There is actually quite a bit of data showing that score optional programs drop out nearly as many good scores as bad, but let’s assume you’re correct. Are you saying that including a bunch of scores we didn’t use is MORE accurate? Do you feel that this is a good way to compare schools, using factors that have been shown to be statistically meaningless in predicting student performance?
      Yes, it’s possible that not using the SAT scores of the very small number of exceptional students I admit through our score optional program might help us slightly in the rankings. Should we then drive institutional decisions by the rankings? If not, then not reporting those scores, regardless of what US News may do, is the responsible thing to do.

  4. This blog helped me understand what is needed to complete the application. Is it possible that the University will send me an application in the mail?

  5. Good observation. It would seem the author of the article in question had other motives for dismissing score optional admissions policies. Allowing students to place greater weight on leadership rather than standardized test scores is a great way to ensure that students who have ambition, as well as a wide variety of other skills, are not overlooked for admission.
