College students are learning hard lessons about anti-cheating software (voiceofsandiego.org)
220 points by discocrisco on Nov 30, 2020 | hide | past | favorite | 306 comments


I see online synchronous 1-2 hour exams as a mistaken result of the "let's emulate in-person teaching in a remote setting" mindset.

You don't succeed in remote teaching and learning by trying to make it as close as possible to the in-person setup. You have to treat it as an entirely different problem.

Consider the synchronous exam. It is a perfect method of grading in person: it is hard to cheat, and everyone takes the same test in the same place. It is as fair as it can get.

In an online setting, however, people can face all sorts of trouble during a few-hour window at home. Your internet might stop working, a neighbor might be making too much noise, etc. Everyone takes the exam in a different setting, and it is as unequal as it can get. It is also practically impossible to prevent cheating.


When I was in college in 2005 I had classes that could be taken online. When tested we were given a deadline, usually 24 hours after the test had been handed out. We could start whenever we wanted, take breaks, work throughout the day, as long as we had it in before the deadline. It seemed to work fine.


Yeah, take home tests were awesome. The problems on the ones I remember were ridiculously hard (specifically thinking about an algorithms take home that we were given a week for), but that spurred us to do some of our best thinking and learning.


It worked even finer for those who paid someone else to do the test for them.


How does one fake proficiency in a role as an individual contributor? I mean after school is through? I have been a software engineer for 14 years. I am self-taught, with only a 2-year degree I completed my 4th year into this career. Can a line on a resume (a degree mention) remedy a failed technical screening process?

Personal experience: I have declined several MD CS holders this year alone for positions because their demonstrated abilities were not up to par (and these were not particularly difficult questions).


If you're lucky you get a manager who doesn't want to rock the boat and just assigns someone competent to 'help' you constantly.


People cheat for all kinds of reasons. I'd say in many cases it bears little or no relevance to their proficiency as an IC. For example, I've seen people cheat in a Computer History class (compsci elective) presumably because they just want an easy A to push up the GPA. It doesn't necessarily mean they can't code.


Even if they can code, I don't think I would want to hire anyone who would cheat on something so basic just to push up their GPA. What is to prevent them from doing the same thing at work to get a better bonus/raise?

In the end it comes down to trust. We can't trust someone who is willing to cheat to get ahead even if they are a super coder.

- Suramya


With remote, anything is possible. Someone basically outsourced his job to China, paid someone a cheaper rate to do his work for him.


Yeah, incredible story... Could have lasted, but too bad he was a bit careless and handed out his company credentials to the Chinese team as well.


> You don't succeed in remote teaching and learning by trying to make it as close as possible to the in-person setup. You have to treat it as an entirely different problem.

I agree at a high level, but practically speaking it's not realistic to expect colleges to completely reinvent their entire teaching systems for a temporary, 1-2 year remote learning period.

Now that we've been dealing with COVID for almost an entire year, it's easy to forget that at one point we thought this would all be over in a matter of weeks or months. The situation was also evolving in real time. Colleges were looking for the most efficient stop-gap solutions, not for ways to completely overhaul their learning experience.

> It is also practically impossible to prevent cheating.

It's a mistake to assume that because we can't eliminate all cheating, we shouldn't bother reducing any cheating.

The advantage of synchronous test taking is that everyone is exposed to the problem set at the same time. The obvious cheat with asynchronous test taking would be for one person to volunteer to take the test early and then send the questions to their peers, all of whom take the test at the last possible minute.

This happens whenever in-person classes offer two time slots for taking a test. The later time slot is always far more packed than the first and comes back with significantly higher scores. Adding the internet and screenshots/camera phones to the equation amplifies this because students can share the exact test, not just what they recall from memory.

> In an online setting however, people can face all sort of troubles in a few hour window in their home. Your internet might stop working, neighbor might be making too much noise etc.

Educators aren't oblivious to this fact. Working with students who have interruptions is just part of the job. Having someone lose internet isn't much different from having someone get a flat tire on the way to the test. It happens, we deal with it, and it's fine.

I also think you're not giving students enough credit. They're not dumb. If noise is a problem, they're going to use headphones. If internet is flaky, they're going to find a better location to take a test.

It's such a weird double standard to see HN champion work from home as unequivocally superior to working in an office, yet whenever the topic of learning from home comes up we get a laundry list of what-about possibilities that might make the experience worse.


> I agree at a high level, but practically speaking it's not realistic to expect colleges to completely reinvent their entire teaching systems for a temporary, 1-2 year remote learning period.

I don't really understand what you mean by this though. I graduated college in 2007, and all throughout my time from 2003-2007 I had half my courses online, including the quizzes and tests.

Online courses and asynchronous testing aren't something new to Covid; colleges have been doing them for well over a decade now.

Yes, you had people try to cheat their way through asynchronous testing by having friends take the tests earlier, but why is that more of a big deal now than it was previously?


> I also think you're not giving students enough credit. They're not dumb. If noise is a problem, they're going to use headphones. If internet is flaky, they're going to find a better location to take a test.

They aren't, but many are much less privileged than you are making them out to be. My mom teaches students who join her class from their car parked as close as possible to get a weak Wi-Fi signal from their house because they have no quiet places at home.


> It's such a weird double standard to see HN champion work from home as unequivocally superior to working in an office, yet whenever the topic of learning from home comes up we get a laundry list of what-about possibilities that might make the experience worse.

This seems like a false equivalence; most employers don't use surveillance software to ensure their remote employees keep their microphone and webcam on, continue looking at the screen at all times, etc. One of the benefits of working from home is that your privacy is _increased_ compared to working in an office; if everyone needed to allow their boss or HR or whoever to demand microphone and webcam access as they worked (and no, this is NOT comparable to Zoom meetings), then of course they wouldn't be praising working from home as much.


> Now that we've been dealing with COVID for almost an entire year, it's easy to forget that at one point we thought this would all be over in a matter of weeks or months.

I absolutely agree, but I have to say, I also think it's somewhat bizarre that anyone ever thought that. I know it seemed weird to me at the time.

Where exactly did everyone think the pandemic would go after a few months of lockdown?


> Where exactly did everyone think the pandemic would go after a few months of lockdown?

We thought we'd combine lockdown with tighter quarantines for visitors and excellent test & trace systems with quarantines for people with covid and that we would, like several countries, get covid under control.


I TAed this semester, and a surprising number of students have totally terrible internet. One poor kid had to attempt the Blackboard exam a dozen times due to connection issues. Even my home network gets saturated with everyone in the neighborhood working from home, which makes Zoom calls impossible when lag is spiking, despite my having the fastest internet my limited ISP choices offer.


In a college scenario, perhaps you can say that dealing with problems is the student's responsibility, but for K-12 schools it is the responsibility of the school to provide education even to children with no internet or headphones.


Absent COVID, institutions should have been (and many were) embracing remote learning anyway.

This should have been an option all along, without the need for a pandemic to force their hand.

>>The advantage of synchronous test taking is that everyone is exposed to the problem set at the same time. The obvious cheat with asynchronous test taking would be for one person to volunteer to take the test early and then send the questions to their peers,

This problem has been largely solved for a long time, because as you noted it is generally impossible to give a test to EVERYONE at the same time.

Thus properly written tests will draw a random selection of questions from a larger pool; the higher the pool-to-question ratio, the better the security (i.e., a 25-question exam drawing from a 50-question pool is not as secure as a 25-question exam drawing from a 200-question pool).
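The pool-draw scheme described above can be sketched in a few lines. This is a toy illustration, not any particular LMS's implementation; the pool size, exam size, and question labels are made up:

```python
import random

def draw_exam(pool, n_questions, seed=None):
    """Draw a per-student exam as a random sample from a larger question pool.

    A higher pool-to-exam ratio means less expected overlap between any two
    students' exams, which weakens question-sharing as a cheat.
    """
    rng = random.Random(seed)
    return rng.sample(pool, n_questions)

# Hypothetical 200-question pool, 25-question exam:
pool = [f"Q{i}" for i in range(1, 201)]
exam_a = draw_exam(pool, 25, seed=1)
exam_b = draw_exam(pool, 25, seed=2)

# Expected overlap between two independent draws is only 25 * 25/200 ≈ 3 questions.
overlap = len(set(exam_a) & set(exam_b))
```

With a 50-question pool instead, the expected overlap jumps to about 12 of 25 questions, which is why the larger pool is more secure.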

This method is also used for standardized tests given at the same time, as it cuts out the problem of shoulder surfing or other in-person cheating methods.


PragmaticPulp doesn't seem to have a grasp on how college systems work. From what I have seen UW, Oregon State, Seattle Colleges, etc were already employing question banks and timed test windows rather than resorting to poorly functioning half measures like Respondus LockDown Browser and its ilk.

Many colleges were already fully capable of distance learning in multiple forms, whether through correspondence courses (what WGU often pitches, complete the project or test and bypass the class, though some of their certificate partners abuse test takers with Respondus or similar) or online learning with systems like Canvas.

Decent colleges offer a mix of these, I can attest to the quality of these programs at Seattle Colleges (specifically North Seattle College & Seattle Central). There is little value in building a panopticon of surveillance in higher education, especially when these divert resources that would otherwise enable students to better master the subject.


Most colleges offered correspondence courses in the pre-internet era. You did everything by mail. There was no proof that you did the work yourself; it was just a matter of trust. I took a required writing course that way because it never worked out to schedule it as a regular class. I think for some of the classes that had exams you maybe went to a local exam center where a proctor would check your ID and you'd take the exam. I don't know what they did if there wasn't an exam center conveniently nearby.


We lost power (and internet) 45 seconds before my son's Physics final last spring during remote learning. He had to quickly scramble to take his testing materials outside and used his cell phone to access the exam.

It worked out for him but easily could have been a disaster. We live at the edge of the school boundary and it was a localized outage due to a car hitting a pole, so he was the only student in his class that was affected. Good luck convincing the teacher if you don't have alternate access to the test.


Are modern teachers really so robotic that they don't understand that 'things happen' outside of their limited domain of expectations?


I think this is a pretty big miss on where the issues with these sorts of systems are. I'm sure some teachers are phoning it in, but the student in the article (Molina) was immediately awarded an F and then had to go through a two-month appeal process to get that undone. Once he was talking to a person, things moved pretty quickly, but the backlog of cases results in a lot of unnecessary stress on students.


If you're a straight-A student, it's probably not an issue. If you're seen as a cut-up or an "average" student, your excuses are just that, and they aren't excused.


Teachers? Most developers I know have exactly the same blind spot. "What do you mean there's no internet connection?"


My uni had me TA this semester, which included proctoring exams over Zoom. They gave me zero info on what I was supposed to do or even look for while sitting there in the Zoom room. I think I was just there to add some semblance of authority. Total waste of time.

It's stupid easy to cheat in this remote world: write your cheat sheet out on a piece of paper and put it just out of sight of the webcam. No technology can beat that.


Crazy idea in this space. Eye-tracking will eventually be standard on consumer VR headsets that can be had for $300. As someone following the industry closely, I believe the Oculus Quest 4 or a device like it will certainly have eye-tracking support. Obviously, if one were to take a test in a VR environment with their eye movements tracked, it would be almost impossible to cheat. Eyes also provide a good biometric identifier. Mirrors, screens, exploits, etc. can be foiled using software locks, HMD cameras, and IMU data. Of course I'm not advocating a testing system like this, but it would put a hard stop to cheating on synchronous exams, enforce some level of standardization, and should be accessible to every student in the first world. There are understandable privacy and ethics issues with something like this. It is probably better to address why people cheat in the first place and make education less reliant on exams (continuous low-impact variable feedback vs. discrete high-impact fixed feedback).


Good luck wearing a VR headset for a full 2 hour exam. I get headaches if I wear one for more than 10 minutes, and the weight is uncomfortable and messes with my hair. More surveillance technology is not the solution.


My wife, who has issues with motion sickness, would absolutely love this approach - she'd be effectively barred from taking exams due to getting headaches and throwing up whenever she attempted it.

Anyways, I'd prefer an approach that led to less invasive face tracking - even if it suffered from a lower detection rate.


A few points:

Their "powerful AI engine" is almost certainly just humans. It might have a few off-the-shelf components like face detection but most of what they claim to do is just so easy to outsource that almost all companies do it. If there is any delay between the system observing a suspect behaviour and the student being told to correct it then they are definitely using humans.

An institution using a service like this is a huge red flag. You should take it as an indicator of a low quality administration if not a low quality institution.

As an engineering problem this task is hard. Ryan Calo (Prof of Law, UW) once presented a fascinating bit of research on trying to automate something as simple as fining someone for speeding. Given perfect information how do you build the system? If someone exceeds the speed limit for 1 second, do you fine them? If everyone around the person is exceeding the speed limit do you use the same rules? If someone oscillates between just above and just below the speed limit, how many times do you fine them? If someone exceeds the speed limit and stays there does this result in fewer fines? How do you square the code written with the law as written? The problems are so extensive it may be that application of rules like this require human level judgement. Proctoring an exam may turn out to be an AI-complete problem.


> Their "powerful AI engine" is almost certainly just humans.

I wonder how parents feel that an anonymous foreigner, most probably male, can watch their daughter in her room working on her exams, and she can't opt out or else she'll be treated like a criminal.

Wonder if at some point we'll find a zip file shared among employees with screenshots of students from one of these outsourced proctoring vendors.


> I wonder how parents feel that an anonymous foreigner, most probably male, can watch their daughter in her room working on her exams, and she can't opt out or else she'll be treated like a criminal.

Why does foreign matter?


>Why does foreign matter?

No or weak legal recourse in the event of wrongdoing being discovered.


Exactly.

Plus the outsourced entity can just go bankrupt and you'll never hear about it again.


No legal recourse, and cultural differences. An adult male in Japan propositioning a 13-year-old for sex is legal in their current system, whereas that doesn't really fly in America.


Just an aside: all populated areas of Japan have an age of consent of 16 to 18 that supersedes the national age of consent of 13.


Because it is another way for OP to say "brown"


Why does everything have to devolve into white-supremacist-vs-SJW-crowd arguments?


Uh, okay. This is kind of a weird take on the whole thing.


It's a valid complaint.


What do you mean by weird?

Sick, twisted? That's the complaint exactly.

Unlikely? Too many things like this happen to dismiss it.

Something else?


It's happened plenty of times before.


It's definitely a face detection thing for some of them. My spouse had to deal with all of her black students being hectored nonstop by the software because it had a hard time reading dark-skinned faces and was constantly interrupting them to accuse them of not looking in the correct direction.


I've seen something related actually play out 20 years back. My dad needed to drive through a particular toll booth multiple times one weekend. He figured out that same weekend you could go pretty fast and the ezpass would still register. So he went fast every time.

A week later he got a half dozen letters in the mail all at once:

* warning: do not speed
* warning (+fine)
* final warning (+fine)
* ezpass revocation (+fine)
* etc.

I think he ended up arguing that he didn't get the first warning before the others, and they agreed to waive the fines and roll back to the first warning. I.E. human judgement applied after the fines "fixed" the issue.

In any case: toll booths are a workplace, there's people there, go slow.


This is getting really far off topic, but the solution is clearly to make speeding fines a "dollars per mile per hour, per hour" system. It should scale continuously both with speed over the limit and with time spent speeding. Programmers who focus on discrete systems are too prone to forget about real analysis. ;)
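The proposed continuous scheme can be sketched directly. The $0.50/mph/hour rate and the speed samples below are illustrative assumptions, not any real jurisdiction's rule:

```python
def speeding_fine(samples, limit_mph, rate=0.50):
    """Continuous fine: dollars per (mph over the limit) per hour.

    `samples` is a list of (speed_mph, duration_hours) measurements.
    Only time spent above the limit accrues anything; oscillating
    just above and below the limit no longer triggers repeated
    discrete fines, it just accumulates a small continuous one.
    """
    return sum(
        rate * (speed - limit_mph) * hours
        for speed, hours in samples
        if speed > limit_mph
    )

# Oscillating around a 65 mph limit: only the over-limit spans count.
trip = [(64, 0.5), (70, 0.25), (63, 0.25), (80, 0.1)]
fine = speeding_fine(trip, 65)  # 0.50*5*0.25 + 0.50*15*0.1 = 1.375 dollars
```

Note the scheme sidesteps the "how many times do you fine an oscillator?" question entirely: the fine is an integral over the trip, not a count of discrete violations.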


So, on a 65mph highway, driving at 66mph for an hour has the same fine as driving at 125mph for one minute? (Minus speedup and slowdown time.) Don't think a linear scale would make sense.

I'm not sure if any scale would make sense, though. The true thing to be fined for should be "driving unsafely", of course, whether that means driving too fast, braking suddenly, swerving, changing lanes without warning, etc. The thing is, the unsafety of all those things depends in very large part on the cars around you and sometimes on the road. (Driving at 100mph on a straight, flat freeway with no cars nearby is safer than weaving between a bunch of 65-70mph cars to maintain a speed of 80mph.)

But it would be really hard to come up with an objective algorithm to calculate how unsafe a rule violation is, and even harder to implement it without a vast array of cameras. So ... they implement an enforceable set of rules even if it's not a good one. (On a related note, it bothers me that driving with more than some arbitrary blood alcohol level is illegal, but driving after e.g. staying awake all night is not, even though the latter is worse[1]. Likely this is partly because checking BAC can be done fairly directly with cheap equipment, while checking how recently someone slept is... I dunno, there might be ways to mostly do that with good equipment, but I assume it's currently impractical.)

[1] "Being awake for at least 24 hours is equal to having a blood alcohol content of 0.10%. This is higher than the legal limit (0.08% BAC) in all states." https://www.cdc.gov/sleep/about_sleep/drowsy_driving.html


Momentum based tickets. The heavier your vehicle, total passengers and load, the higher the fine.


The point is to immediately stop them from speeding and causing a big problem. Why in the world would you track them for miles or hours?

The ticket is merely an incentive to not do it in the future.


This is clearly not the solution, unless it is accompanied by changes to speed limits in North America- which itself introduces a bunch of complexity and negative side-effects.


It won't solve speed limits as a civic policy issue, but it does solve the specific thing the question was posed to ask: the morass of trouble created by trying to write discrete rules for continuous phenomena.


Or you could just set the speed limit appropriately, at a level that most drivers will naturally conform to.

This optimizes for both expediency and safety, but, sadly, not for revenue, or for readily-available probable cause to pull over essentially anyone the police want to pull over.


> Or you could just set the speed limit appropriately, at a level that most drivers will naturally conform to.

The problem is that such a limit would not be a safe one, you would basically be replacing the limit with a sign that posts the rate that people like to drive at.


Speed limits on the I-15 in Utah have successfully been raised to 80mph without any negative impact to driver safety.

Turns out that people do drive at a reasonably safe speed, generally.

Source: https://jalopnik.com/utah-raises-some-speed-limits-to-80-mph...


>The problem is that such a limit would not be a safe one,

According to who? The minority who wants the limit set elsewhere. Things like "what speed is safe and reasonable" are matters of social consensus. The "experts" can pontificate all they want and the vocal minority can gripe all they want but the median or average person and society at large is always going to be right on matters of social consensus. It's a tautology.


According to dead pedestrians, mostly.


The problem is that posting a lower limit doesn't really stop most people from driving faster. Then, some people will obey the speed limit and they become hazards. Other people might believe the posted speed limit tells you about the speed you can expect cars to go on the road. It doesn't.

The main safety increase I can see is with respect to commercial trucking. Not all vehicles have the same acceleration and braking ability. But the fact that people just ignore the speed limit is a hard problem.

So, yes, as far as is practical, that's the idea. We just post the speed people are driving at.


There is a real difference, in both actual safety and perceived safety, between divided roads and undivided roads with the same number of lanes. Wider lanes seem (and are) safer than narrower lanes. See https://en.wikipedia.org/wiki/Traffic_calming for more.

Reasonable drivers conform their driving habits to the environment.

Unreasonable drivers should not be allowed to drive.


> There is a real difference, in both actual safety and perceived safety, between divided roads and undivided roads ... Wider lanes seem (and are) safer than narrower lanes.

> [Wiki] Traffic calming can include the following engineering measures ... Narrowing traffic lanes ... Converting one-way streets into two-way streets forces opposing traffic into close proximity, which requires more careful driving

What is happening here? Did the engineers forget that they were supposed to optimize for safety, and instead optimized for getting people to slow down? Does the net effect of this improve safety, or worsen it?

If everyone drives as fast as they can while meeting some perceived safety level, then does that mean these efforts are unlikely to affect safety and merely to slow everyone down? Is the best set of measures the maximally deceptive set, which looks as dangerous as possible while being as safe as possible?


> If everyone drives as fast as they can while meeting some perceived safety level, then does that mean these efforts are unlikely to affect safety and merely to slow everyone down?

No. It means that people are usually wrong when they judge the safety level of their driving speed on a nice clear road that happens to have pedestrians next to it or trying to cross it, but are more accurate at judging safety when they are primed to expect obstacles, steering challenges, other actors moving not in parallel with them on the road itself.

https://www.ite.org/technical-resources/traffic-calming/


Do you have any evidence of this?

My understanding is that speed limits have been largely unchanged for the past 50+ years. Car safety, on the other hand, has increased by a great deal.

It seems likely to me that speed limits are conservative when it comes to safety.


I imagine because they are catering to the lowest common denominator, the human behind the wheel.


I suspect it has more to do with the incentives faced by those who set the speed limits.

I don't know exactly how it goes. If it's chosen (or heavily influenced) by politicians, I suspect politicians can win some votes by saying they'll improve safety by lowering speed limits (particularly around schools or other places where children might be); while they'd be less likely to win as many votes by saying they'll raise the limits (exposing them to the risk of their opponents calling them reckless/irresponsible). If it's chosen by non-elected officials, that's less of an issue, but something like it may still be there; or it may be that raising the limit and then there being a fatal accident will damage your career, while lowering the limit and irritating everyone will not damage your career.


The speed limit is (in the USA) supposed to adhere to the 85th percentile rule, that being the speed at which people drive on a free-flowing road with no traffic and no enforcement. Most freeways have speed limits that are probably closer to the 0th percentile than the 85th percentile. I know driving the speed limit on some roads in my neck of the woods would be utterly hazardous.


People like to drive at the rate that is safe. Humans are pretty good at measuring risk.

I've driven on many roads in countries that don't have any speed limits. The overwhelming majority are driving a speed that makes sense given the road, driving conditions, etc.


>Ryan Calo (Prof of Law, UW) once presented a fascinating bit of research on trying to automate something as simple as fining someone for speeding. [...]

Most of the problems you've listed only exist because of expectations created by inconsistent/lenient enforcement by human police officers. People don't seem to have a problem with strictly enforced rules in finance, e.g. "your bank account can't go below zero without triggering an overdraft".


> People don't seem to have a problem with strictly enforced rules in finance, e.g. "your bank account can't go below zero without triggering an overdraft".

I think a lot of people have problems with strictly enforced rules in finance. Especially when banks re-order daily transactions in order to maximize overdraft fees.


>I think a lot of people have problems with strictly enforced rules in finance

They have problems with it because the rules cost them money, not because it's hard to understand or rigidly enforced. If the rules just resulted in a "transaction declined" they wouldn't really care either way. For instance, I don't think anyone thinks that it's unreasonable for your debit card to get declined if you don't have enough money.


I think you missed the reference to reordering.

In the case referenced, there were 3 transactions on one day, 2 of $10 & 1 of $20. User has $20 in account. Do you take the true chronological order of transactions (10/10/20) & charge 1 fee, or do you go highest to lowest (20/10/10) & charge 2 fees?

A bank executive would likely argue why it's 'fair' to charge 2 fees because a business day is the relevant period for a bank (closing at end of day, etc.) & that method of accounting is mentioned on page 85 of the checking account TOS that a user signed.

Many consumers would argue that the bank's relevant period is meaningless, especially in the age of computers. They would also argue that it is unfair to lay out complex rules like this in an opaque way because of the asymmetrical information advantage that a bank has. They wouldn't complain that the rule was enforced per se, they would complain that the rule (which is plainly anti-consumer) exists.

Same is true of this cheating stuff - I think in general, students want a fair platform for grading. They just want that platform to be actually fair.


> They would also argue that it is unfair to lay out complex rules like this in an opaque way because of the asymmetrical information advantage that a bank has. They wouldn't complain that the rule was enforced per se, they would complain that the rule (which is plainly anti-consumer) exists.

Doesn't that justify my point? If we go back to the speeding example: if the rule were that you can't go over the speed limit (within the capabilities of the measuring device), then everyone would drive a little more slowly. The only reason people drive 5-10 mph above the speed limit is that 5-10 over is generally accepted to be "fine". If anything, strict enforcement of speed limits reduces the room for abuse by law enforcement (e.g. pulling someone over for going 1 mph over).

Also, at the risk of victim blaming, maybe it isn't such a good idea to have your deposits/withdrawals lined up on the same day? Deposits can get delayed/withheld, and withdrawals can be moved up unexpectedly. Leaving zero days between a deposit and a withdrawal is just asking for trouble. I agree that reordering the transactions from largest to smallest is probably greed-motivated, but at the same time, expecting it to behave differently is optimistic at best and foolish at worst. It's the equivalent of relying on undefined behavior in programming (e.g. assuming that reading one byte past the end of an array won't cause a fault).


> Also, at the risk of victim blaming, maybe it isn't such a good idea to have your deposits/withdraws lined up on the same day? Deposits can get delayed/withheld, and withdraws can be moved up unexpectedly. Leaving zero days between a deposit and a withdraw is just asking for trouble.

I'm not sure you understood the example that GP gave; there was no mention of withdrawals. The idea was that you have $20 in your account, you buy something for $10, then later buy something else for $10, and then later buy something for $20, and the bank reverses the order of applying the transaction and says that you made two purchases after your account was empty, so you get charged two overdraft fees.


Have you considered blockchain? /s


I took an online class in 2015 that used Proctorio. Even at that time the proctoring/anti-cheat software was a nightmare. I was semi-accused of cheating because of "anomalies" during one of my tests. It was quickly cleared up by talking to the professor.

The software is a privacy disaster and any computer that has had any of this spyware installed should be considered compromised. I kept a separate hard drive and would swap it in to take tests.

To fix the problem, grading measures need to be changed to accommodate the new world of online classes rather than trying to shoehorn old test proctoring into a remote space. This software only stops bad cheaters anyway.

I'm already imagining the fights I'm going to have with my daughter's schools in the future when they ask us to install this malware.


The main problem is that receiving a credential and receiving an education are very different things that we do at the same time out of tradition and convenience. When testing was easy, it made sense to lump it in with teaching, but now that testing is impossible, I say we ditch it for the time being.


> The main problem is that receiving a credential and receiving an education are very different things that we do at the same time out of tradition and convenience

Unfortunately I think most of the world sees them as one thing :/.


The problem with that is that you're throwing out the one that is more directly valuable to your customers.


No, I'm throwing out the one that is impossible to do.


What if you only have linux computers at home and can't run their spyware?


This is precisely what I said to one of my professors this semester and thankfully they were reasonable and didn't use Respondus for either of the exams like they had planned.


If you have a linux computer you can boot from usb or a second drive to run windows. That's what I did when I had to take exams using Proctortrack.


While I agree with you that most people who run Linux as a daily driver might be capable of running Windows as a secondary OS on occasion, this discounts the fact that he or she would be required to purchase a license (or else break the law), and ignores the fact that they should not be required to do so.


That hasn't really stopped colleges before. I've had quite a few classes where the textbook cost more than a Windows license.


You’re right, but I still think it is not fair for the university to require you to run certain software if it does not directly benefit your learning


While I'm against the use of crappy anticheat software, the windows solution would be easy for institutions with volume licensing. Perhaps a fair middle ground is to force institutions to provide bootable USB drives so students avoid corrupting their own installations.


Recent versions of Windows have made it a lot harder to install to a USB drive. A month ago I tried to install Windows 10 onto an old laptop SSD (mounted in an enclosure and connected through USB) so I could play a game with my brother online that was only available through the Windows store, but the installer flatly refused to install to it because I was using the Home edition of Windows instead of Enterprise. I eventually found some freemium backup software that could copy an installation from a regular hard drive onto the USB drive, though weirdly the first tool I tried that claimed to have this feature didn't leave the SSD bootable, so I had to find something else that did it properly. Given that Microsoft went out of their way to disable this in their installer, I wouldn't be too surprised if the workarounds keep getting harder, or become impossible without paying for the Enterprise edition (which is not something students should ever have to pay for).


Education = Enterprise edition for all practical purposes, so they wouldn't have to pay.


step one: purchase windows

honestly, I think the question was more of "what should the educators do?" not "what should the student do?"


I've never dealt with this software but when I did my CS degree, several of the applications we worked with throughout the course were distributed as Windows binaries. If you'd ever done homework or study at all, you don't only have Linux.


> If you'd ever done homework or study at all, you don't only have Linux.

Speak for yourself; at my school, even the class that used C# worked with mono.


I never needed to use Windows or MacOS for any of my classes in college (although I initially did for the first few months of freshman year, before I first started using Linux). For some classes, the students using MacOS or Windows actually had to jump through more hoops; one class required us to ssh into the school's Linux server for some assignments, which required Windows users to download extra software due to not having an ssh client by default (at least, not back then; I'm not sure if PowerShell has one now), and in another, we had to use Xilinx, a proprietary IDE for writing Verilog to design circuits for FPGAs, which only had builds for Linux and Windows, not MacOS. For the latter, the TAs literally distributed flash drives with an Ubuntu VirtualBox image for the Mac users (who comprised the majority of CS students at my university).


> several of the applications we worked with throughout the course were distributed as Windows binaries

Because most linux users use their distribution's packages and it is normally easier than hunting for windows binaries.

Need an assembler? `sudo apt-get install nasm`, etc. I am a math/cs undergrad and have not used a Windows machine in over 6 years. (In my case it would be either editing my home.nix or creating a shell.nix for a course, rather than apt/dnf/etc.)


> If you'd ever done homework or study at all, you don't only have Linux.

If something applies to your low-quality university, it does not mean that it applies everywhere.


Your experience does not match mine at all. I earned a CS degree with a computer that only had Linux installed and I never missed Windows.


Here’s the thing, if you want to cheat, you’ll find a way to cheat. This won’t stop anyone, it just stops casual cheating at a pretty hefty price.

Hell, just reading this article, I thought of various ways around it: have someone off screen listen to you read the questions aloud, or have another off-screen person mount a projector on the ceiling after the test starts.

You could have a kvm switch to a whole different computer, an AI overlaying data on the screen (reading from the video output, and injecting into the video output), or any number of ridiculously complicated setups.

It probably won’t catch any serious cheaters.


Picture in picture monitor with multiple sources. Credit to someone who mentioned this in a previous thread.


Most proctoring software that records the student would trivially root out the cheating method you mentioned about reading aloud the questions to someone else. They usually record audio as well, and if the software detects significant microphone input, will flag the test.


Just mute your mic in hardware?


A lot of proctoring software will flag you for that as well. Then when the professor reviews the footage it will be obvious that you're talking to someone off screen.


Do they analyze the noise floor or something? I guess you'd have to defeat that with a noise gate.


Put your mic in another room then.


No need to overcomplicate it. These technologies can be overcome with a phone in your lap connected to a groupchat with everyone in the class, or even a piece of paper with your cheat sheet.


Fuck grades.

They were already a problem in universities and this is their final and worst form.

University systems have the major problem that a significant portion of students are only there for a degree, and their participation is playing the game in order to get that piece of paper, and the GPA number rating them.

The core of this problem is the question “will this be on the exam?”

Testing of course can be an important part of learning, but making that the metric by which you decide to hand out degrees substantially damages the value of testing as a teaching tool, and damages the value of a university as a place for research and learning.

Another way needs to be found to sort students into degree worthiness.

Raising humans for the first quarter of their lives in a dystopian police state is not what anybody should strive for. We need to figure out how to measure people less.


Got a BS/MS in CompSci in my twenties when I was immature as a person, now doing online classes for a non-tech major. As an "adult student" the system seems 10x more broken than it was when I was a high school graduate.

Being young and tuition-free, it was easy to grind through the material, just worrying about the next test in the next class. Now here I am as a tricenarian, trying to relax and actually understand hundreds of pages of anatomy material while everyone else seems to just binge it, and it all seems so asinine. After 75 pages of reading, exams continue to be a black box. Here I am paying for instruction, and the professors can't be bothered to tell me what's important to know for the career, let alone the test.

I got one teacher to chat off the record and he basically said to "get through it." What a disservice to students. Why are there still weeding-out classes? How many more doctors, surgeons, programmers, and engineers would there be in the world if it weren't for these classes? The Montessori education system and the like seem so superior. College should be pass/fail.


I agree with a lot of your comment, and I basically think large swathes of education are just fundamentally broken for a variety of unfortunate reasons.

That said, I would like to answer this question:

> Why are there still weeding-out classes?

In majors that are oversubscribed (e.g., pre-med, CS, etc.), these classes weed out the people who like the idea of the major/career more than the actual work involved. Good weeder courses (and I think that these exist) allow later classes in the major path to be more focused on the capable and motivated learners rather than baby-sitting the less-capable and less-motivated.

There are two reasons why this is desirable:

1. There is no way that a department could get through the same amount of material if they had to accommodate the less motivated and less capable in all of the courses.

2. Your department will get a reputation in the field for having students who are capable of X, Y, and Z.

Typically, if someone can't make it in the weeder classes at their school and they still want to enter that career path, then maybe it's better to go to a less competitive school. This sounds elitist, and maybe it is, but that's the workaround in our current system.


> 1. There is no way that a department could get through the same amount of material if they had to accommodate the less motivated and less capable in all of the courses.

This is the high-school-ification of university, and it's part of the problem. This is not mandatory learning / babysitting. Just stop accommodating them. Every class just moves at the pace it needs to move at, and if they can't keep up, so be it.


For my own notes, for posterity: on further reflection, maybe the schools aren't being accommodating. Maybe weed-out classes are hard because they're just moving at the pace they need to move at. But since they tend to be foundational classes, they have a lot of area to cover and so move relatively fast.


This summary captures the intent of my comment accurately.

I will add that the follow up classes can also move more aggressively since the weaker students have been culled by the required weeder course.


It sounds like you are in a pre-med class?

My SO once TA'd for a pre-med class. Holy hell, the students were something else as compared to other disciplines. They literally would not leave my SO's office until they had argued up their grades on every assignment. My SO had to get security to escort them out, I am not joking. If they forgot, say, to multiply by 2 and then got the answer wrong, they still argued for full credit despite the very clear error.

There was no shame whatsoever. We figured the other Profs and TAs just didn't want to deal with them, so the pre-meds learned to just wait out the graders to get their grades.

To be clear, this was maybe ~15% of the class, but still a lot of students. The Prof was no help with this either and just advised to pass them along. What a mess.

I understand that med school admissions is about as cut-throat as it gets, but my lord! It's these pressure cookers that lead to MD burnout and mis-selection. After all those shameless hours in undergrad, then med school hell, you finally get to practice on people just looking for their preferred opioid.

Sorry to hear about the mess you are in, but you are right, it's systemic.


I experienced the same thing while TAing physics for biosciences for a few semesters (which was 90% pre-meds), though I never had to call security.

One apparently needs close to a 4.0 GPA, plus many extracurricular volunteer hours to qualify for medical school, plus glowing letters of recommendation. I also overheard a number of stories about pre-med students intentionally sabotaging other students by misleading them to think that private group tutoring sessions (to study for the MCAT) had been canceled.

That experience helped form my beliefs about the drivers of the high cost of medical care in the US.

Specifically, that there is no shortage of people who are willing and qualified to be medical doctors, but that doctors and medical schools collude to artificially limit the supply of credentialed medical doctors as a means to increase doctor salaries.

As you alluded to, I think it also creates an adverse selection problem, where the people who become doctors are mostly not those with excellent diagnosis skills or bedside manner or ethics, but those who are the most cutthroat and desperate to get into a high-paying career.


> I also overheard a number of stories about pre-med students intentionally sabotaging other students by misleading them to think that private group tutoring sessions (to study for the MCAT) had been canceled.

These aren't rumors.

Pre-meds hate other pre-meds and are conditioned to play a zero-sum game against one another.

We then expect them to completely flip their attitude and become team players, collaborating with other medical professionals (former pre-meds), nurses, and social workers.

No wonder healthcare is so dysfunctional.


> I got one teacher to chat off the record and he basically said to "get through it." What a disservice to students. Why are there still weeding-out classes? How many doctors, surgeons, programmers, engineers would there be in the world if it wasn't for these classes? The Montessori education system and the like seem so superior. College should be pass/fail.

Honestly, that's typical of most lower-division Biology courses. The theory goes 'if you cannot pass this, you have no chance on the MCAT,' which may be true, but some of us had no desire to be a physician. When someone complained about such an absurd structure, it was explained to me that's just 'how the system works.' Many of us who finally passed would have nothing directly to do with medicine and would AT BEST be considered ancillary services to Medicine, and we were not amused. I personally will only take industry-centric certifications, and I will never consider going back for anything degree related.

Google really is making the smartest move of all the FAANG corps in that regard, by placing greater value on their own certifications rather than degrees.

With that said, that's really what it's there to do: isolate those who can best commit to high loads of dated and rather antiquated rote learning (my near-college-credit HS Biology class still graded us on Kingdoms when Domains were already well established in the University lectures I attended), 'thinning the herd' for the upper divisions, which were super-capped when I was in University since all fields in Biology were impacted.

With online learning now being the default for so many, I at least thought it was sad to see the tech still hasn't solved this problem, which is mainly one of distribution/class seats.

I think one of the two elite schools in the UK (Cambridge or Oxford) has an actual in-person skill assessment for your final rather than a written test, and most of your grade depends on that. My Inorganic Chemistry professor said such final exams were always the most preferable, albeit impractical, way to go about it. As a regular at his office hour lectures, I wish I'd had the pleasure of seeing what that would have been like.

> My SO once TA'd for a pre-med class. Holy hell, the students were something else as compared to other disciplines.

Been there, seen that as a Biology major just trying to get my foot in the door to pick up a graded paper during office hours. It's really sad now that I think about it as an adult who left the industry long ago: you have no idea the weight on those kids' shoulders to make it past the openly touted 'filter' process, be it self-imposed or from their family. It's a sad reality how few end up using it all in these two horrible job markets for young, inexperienced graduates.

My heart goes out to anyone who has to graduate in this market. 2008 was bad, but I think this may end up being worse, since automation wasn't as big a threat in 2008 as it is now in 2020 with COVID.

PS: Also, it's well known amongst Biology majors that Adderall abuse doesn't just start in med school; it's a well-established habit by undergrad, possibly even the HS level for some.


> Google really is making the smartest move out of all the FAANG corps in that regard, by placing grater value on their own rather than degrees.

Sorry could you explain that? Do Google now run courses and place value on having completed them in interviews?


> Sorry could you explain that? Do Google now run courses and place value on having completed them in interviews?

Yes. They are slowly offering those options along with other tech companies.

I think Shopify has the best model: https://devdegree.ca/


Is it a real degree or some type of bootcamp?


A professional certification, much like the A+ certification that gives you a piece of paper to qualify you for entry work into IT. They're exclusively on Coursera for now, so it's more like online learning at a university--I took a few classes online for my undergrad and it was kind of the same: online lectures, Q/A, forum participation points, with periodic exams and a final. I'm told bootcamps vary from pretty generic and cookie-cutter to pretty intense and mentorship-driven, so I can't really say one way or the other, as all my tech stuff is self-taught and then supplemented with online courses and on-the-job projects, but my sector of fintech is incredibly niche so there isn't really any other option.

Give it a shot, you can survey most courses on Coursera (where Google is offering said certification) for free and pay if you want to stay and earn a certificate. Udemy is another one I used when I wanted to hone my python skills when I was going from my developer role at IBM.


One professor of mine let us bring books and notes into our written exams knowing that if you didn't actually do any of the readings or take any notes, those resources wouldn't really help you anyway.

But if you had adequately prepared, the books and notes could really help you craft an excellent written response.

In this scenario, it took the focus off memorizing things and instead focused on understanding the content and being able to communicate something worth reading.


And much closer to real life. Nobody will ding you on the job if you need to look up the implementation of an algorithm or data structure. Knowing which algorithm or data structure is appropriate is far more important.


And sadly so, this is still not the case for job interviews, where they expect you to memorize algorithms you normally search online or refer to your previous implementations.


In my engineering education, my favorite and most productive learning came from the professor who assigned graded homework and provided fully worked solutions at the same time the problems were assigned.

You had to do them, you were completely able to just copy the solution, and when you were working on something you got immediate feedback as to whether you were doing it well. If you wanted to learn you could, if you didn’t you wouldn’t. The lack of a week or two of lag between solving a problem and figuring out if you had done it correctly really made a difference.


I was a TA for the department head in CS, and he gave us free rein to handle grading and such however we felt was best. More important things to do, I guess :)

We had Homework, Quizzes, and Tests, 33.33 each. Homework was graded like you said. Quizzes and Tests were open notes but on a relatively tight timescale. If people missed a homework, they could turn it in late for some linearly scaled penalty. If they missed a quiz or test question, they could come in and demonstrate/defend a solution off the cuff during office hours for 70% credit or something similarly high.
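Roughly, the policy sketches out to something like this. The one-third weights and the 70% defense credit are as described above; the 10%-per-day late decay is an invented placeholder, since the exact scaling rate wasn't part of the scheme as stated:

```python
# Sketch of the grading policy described above. The equal weights and the
# 70% defense credit are from the comment; the late-decay rate is assumed.

WEIGHTS = {"homework": 1/3, "quizzes": 1/3, "tests": 1/3}

def late_homework_score(raw, days_late, decay_per_day=0.10):
    """Late homework loses credit on a linear scale (rate is a guess)."""
    return raw * max(0.0, 1.0 - decay_per_day * days_late)

def defended_score(original, defended_ok):
    """A missed quiz/test question can be re-earned at 70% credit by
    defending a solution off the cuff during office hours."""
    return max(original, 0.70) if defended_ok else original

def final_grade(homework, quizzes, tests):
    return (WEIGHTS["homework"] * homework
            + WEIGHTS["quizzes"] * quizzes
            + WEIGHTS["tests"] * tests)

# e.g. homework turned in 3 days late, one failed quiz defended later
hw = late_homework_score(0.95, days_late=3)  # 0.95 * 0.70 = 0.665
qz = defended_score(0.40, defended_ok=True)  # bumped up to 0.70
print(round(final_grade(hw, qz, 0.85), 3))   # 0.738
```

The key property is that every path eventually leads back to demonstrating understanding, so the points follow the learning rather than the other way around.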

Students loved it. They no longer had to stress about getting an A, and they could instead focus on understanding the material, since that was the easiest way to (eventually) get the points.

I will say, though, this strategy was very time consuming for a large class. It also requires the TAs to have mastery of the material, which (shocker) is unfortunately not all that common.


Through a number of classes I took online and in person with online homework, the immediate feedback was the one thing I really appreciated.

Statics is very much a class where the same basic concepts are applied tons of times in different ways, and you kind of need to see and do a lot of problems to get good. My professor had us do online homework with unlimited attempts, but we had to submit our hand written work as well. Even though that class was entirely online, I felt very confident applying what I learned on projects.

The immediate feedback was 10x better than having the week or two of lag, and 9x better than having the answers in the back of the book, because I never ran out of problems to try.


> University systems have the major problem that a significant portion of students are only there for a degree, and their participation is playing the game in order to get that piece of paper, and the GPA number rating them.

In defense of the students, they're just following the incentive gradient in front of them.

For all the romantic notions about how college should be optimized for exploration and "real learning", only a small percentage of people actually want that. And that's fine! Not everyone needs to be that sort of person; you should be able to have a reasonably comfortable career even if you aren't.

But the world is set up right now such that, without a university degree, you fall through the cracks and it's game over. In lots of high-earning jobs, a degree is either de jure required (law, medicine), or de facto required as most major employers won't look at you without one (finance, most tech stuff). So universities become degree mills because that's what the market demands, and students are only there so they can get a job afterward.

To fix the universities, you have to fix the incentives.


Let’s not fail to mention the $100k elephant in the room :).

It’s very hard to focus on real learning when your financial freedom is at stake. People cannot be faulted for feeling that every minute spent studying the liberal arts curriculum is a minute that might be better invested in studying something that slays the elephant.

It’s a life or death feeling when the money involved is more than your parents have earned cumulatively.

Slaying the elephant becomes the end goal, everything else is secondary. A high GPA is a spear, and ethical integrity is a cheap way of obtaining it. This doesn’t explain why everyone cheats, but I think it attracts the otherwise “good” people, who are essential for normalizing the practice.

IMO, we need to do away with the elephant, and the rest will follow. Schools need not be so expensive. I hope some startup school will do the job. Otherwise, companies might be the first to do it.

It’s probably cheaper for Google to operate a University than it is to operate their recruiting pipeline :)


I know a lot of people on here don't like them but software bootcamps are very much the startup working at solving the job training problem. Sadly, it is an industry where caveat emptor certainly applies, but if you do your research there are definitely programs that are worth their price.

Check out CIRR.org, which reports audited numbers from bootcamps. Good luck finding a university that reports comprehensive, audited financial and career outcomes for their students. The numbers don't really exist, because it would be, guessing here, VERY embarrassing for the universities.


I seem to remember being given placement and salary statistics when starting out in engineering.


I got statistics too, starting out in engineering. But based on the surveys I’ve participated in since graduating, I’m suspicious of their methodology...

Basically there’s no incentive for their studies to be scientific, and I don’t think they are. Their most-touted metric was “ROI”, which is super vague and highly dependent on in-field job placement, which was measured in a highly suspect way at my school.


Oh absolutely, little blame can be fairly put on students, though everyone participates in the system and deserves some (and eventually everyone responsible was a student themselves at one time).

> To fix the universities, you have to fix the incentives

Who better than the universities to drive change? That is sort of their fundamental purpose, is it not?

If you need to lead a university around with a carrot, it really has failed to be much of a university.


If we needed any more proof of how badly our universities have and continue to fail in their educational mission, the current year has provided it. There's no reason college students need to be on campus in a time of pandemic. Many schools have opened in order to "support sports" and for other asinine reasons, and then closed almost immediately. [0] Educational institutions that actually value education have cancelled athletic seasons completely.

[0] https://www.nationalgeographic.com/science/2020/11/the-colle...


What's most insane to me with a degree system: Everyone who has the same 'degree' could have had wildly different classes. What's the point? Maybe we need to dial it back to more of a 'certificate' based approach? Instead of the letters-beside-your-name, list the completed certificates. And this also gives the opportunity to better upgrade skills in the specific problem area. And maybe it can encourage more interdisciplinary study.


I don’t agree with this approach, the biggest value of a university degree is teaching someone how to think and learn in combination with some domain knowledge. Unless you are going for higher degrees, the specialization isn’t of prime importance. i.e. what one engineer/scientist/scholar learns within their field in undergrad compared to another is really not so important for their ability to do things related to that degree. You don’t need to be exclusively prepared for your exact job.

We have accreditation boards to determine what is required for a common base.


Maybe I wasn't totally clear, but I think we actually agree.

To me, a focus on a 'certificate-based' system is about acknowledging that the boundaries of those domains are growing and changing. Gaining knowledge and expertise in one area can inform your work in another. The idea would be to broaden abilities, not focus on being 'exclusively prepared'.

> the biggest value of a university degree is teaching someone how to think and learn in combination with some domain knowledge.

I would add also: 'how to work', 'how to learn' and 'who to know'. By 'certs' I didn't just mean tech specializations. Courses can be about broader topics. A course on politics can be absolutely beneficial to an engineer. A more modular approach could also encourage lifelong learning rather than the "checkbox" that a degree satisfies.

>We have accreditation boards to determine what is required for a common base.

That's the problem. Those boards/requirements vary based on Universities and available instructors and resources. Yet, everyone comes out with the same degree.


Your last sentence nailed it: this is a society problem, not a university problem. It's insane how much emphasis is put on these measures by parents from a very young age.

School boards here, and I'm sure many other places, decided to nix grades from March onwards after the pandemic hit. The announcement wasn't made until closer to the end of the term. I remember some parents being furious claiming that their kids "wasted their time" on remote school if a grade wasn't being assigned.

I've had this discussion many times, and as a university drop out, my opinion has always been similar to yours. One thing I've learned is that debating the merits of the higher education system is right up there with religion and politics in terms of topics to avoid at a dinner party.

People take great pride in their educations (read: pieces of paper). They can take great offense to the institutions and systems that produced those pieces of paper being challenged. These values have been passed on and hardened through generations. This is the first thing that needs to change before there's meaningful reform, and it's going to take time.


My son is applying to universities now. Last spring he was taking 5 AP courses out of 8 required classes. The statewide decision to go to pass/fail cost him hugely in his weighted GPA, as he would have aced all 5 classes. Given that he attends a high school that is average at best, it really could have made a difference for him.

Fortunately, he navigated the mess that was AP testing last spring and his test scores prove that he knew the material but as a parent who watched my kid work so hard, even through the remote learning portion, it was quite stressful for both of us.

Now we wait to hear from the initial wave of applications. The system, as it stands, sucks but it is the system we have. Perhaps the great experiment starting with the class of 2021 may drive changes to make it better.


Fwiw I went to a no-name state school and found university to be invaluable. Classes, schoolwork, and exams gave me something to focus on and build the skills that I use routinely today. I believe my school was fairly balanced wrt the reasonableness of work expected and exams, but was overall a fairly typical education. I guess some people will benefit more than others from the enforced structure.


I didn't see anything in his comment that implied university wasn't valuable.


> At the self-described “heart” of the company’s monitoring software is Monitor AI, a “powerful artificial intelligence engine” that collects facial detection data [...] to identify “patterns and anomalies associated with cheating.”

> "... people who have some sort of facial disfigurement have special challenges; they might get flagged because their face has an unexpected geometry.”

So this company is extrapolating "patterns ... associated with cheating" from facial geometry. This is just phrenology laundered through their "powerful artificial intelligence engine" black box. Predicting behavior from the shape of someone's skull is still bullshit pseudoscience, even if the calipers[1] are replaced with a bunch of linear algebra.

[1] http://antiquescientifica.com/phrenology_calipers_George_Com...


> This is just phrenology ...

It's probably worse. I'd bet that in addition to humans with non-'normal' heads, it would also flag the blind, the deaf, sufferers of Tourette syndrome, certain Muslim women, bearded men, humans of African descent, humans of Asian descent, etc.

> even if the calipers[1] are replaced with a bunch of linear algebra.

I love this line. If your head does not fit the space of eigen-heads, throw an error.
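The "eigen-heads" joke maps onto a real technique: fit PCA to the training population and flag anything whose reconstruction error is large. A toy sketch on synthetic vectors (no real face data or any proctoring vendor's API involved) shows the failure mode directly: anything unlike the training population gets flagged, whatever the reason.

```python
# Toy "eigen-head" anomaly check: project a face vector onto the top
# principal components of the training set and flag it if the
# reconstruction error is large. Purely illustrative synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Training "faces": 200 samples that mostly vary along 5 directions.
basis = rng.normal(size=(5, 64))
train = rng.normal(size=(200, 5)) @ basis + 0.01 * rng.normal(size=(200, 64))

mean = train.mean(axis=0)
# Top principal components via SVD of the centered data.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:5]  # the "eigen-heads"

def reconstruction_error(face):
    centered = face - mean
    approx = (centered @ components.T) @ components
    return float(np.linalg.norm(centered - approx))

threshold = max(reconstruction_error(f) for f in train) * 1.5

typical = rng.normal(size=5) @ basis           # lies in the learned subspace
atypical = typical + 10 * rng.normal(size=64)  # doesn't fit the eigen-heads

print(reconstruction_error(typical) <= threshold)   # True
print(reconstruction_error(atypical) > threshold)   # True
```

Nothing here knows anything about cheating; "unexpected geometry" just means "poorly represented in the training data", which is exactly the complaint.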



And the non-neurotypical.


It's clearly just looking for people looking at another computer or otherwise glancing elsewhere before entering answers. It's not phrenology. It may not be effective, either, and I don't want to be put in a position of defending this company. But it shouldn't be interpreted as analyzing whether someone looks like a cheater.


It's still not far off.

My dad had a pretty severe lazy eye, and I have no doubt eye tracking software would think that he was looking down at his desk constantly.

I stare off into the distance (usually sideways) when I'm thinking.

Those sorts of things are common and barely conscious.

So, what's left for the AI? Trying to detect if people seem nervous? Well, I've got friends who have nervous twitches under normal conditions.

What is normal behavior varies wildly person-to-person. Trying to figure out "is someone glancing somewhere before answering" vs "does someone nod to themselves when they think they got an answer in their head" doesn't sound like something AI can do.

Figuring out "is this person cheating" from a recording of their face sounds pretty close to phrenology. Determining personality from skull-size, determining cheating from facial-motion... they both sound bogus, and like someone's trying to create a correlation that cannot account for the variety of human behaviors and shapes.


I read the article. The specific cases mentioned involved missed instructions, interruptions where the test taker left their seat, or the test taker talking. This seems intrusive, an invasion of privacy, and unreasonable. I object to the use of this software. But I really don't think they are doing, or even being accused of, anything like phrenology. I think it weakens the case against this type of software to make inaccurate accusations like that.


> It's clearly just looking for people looking at another computer or otherwise glancing elsewhere before entering answers

This assumes that the AI only picks up on patterns between eye movement and cheating, and not correlations between unrelated dimensions of data in the dataset that the model was trained on.

Famously, such correlations created AI systems that resulted in disparate impact on legally protected classes in the US.

Also, these systems are usually ill-equipped to handle anomalies in an accurate capacity.


I (and a lot of people I know) would do pretty badly on a test that flagged me for staring into space while I think.


The thing that has always frustrated me about anti-cheating/anti-plagiarism software is that it almost always only hurts people who did something by accident or unintentionally. When I took a Kubernetes certification exam as part of an old job, we used software like this, and at one point I leaned too close to the screen so the proctor couldn't see my face, and that got me flagged.

People that want to cheat or game the system will find ways, it isn't hard to do. In undergrad we set up a copy of the code-checking software that our department used so that we could share code without it getting flagged as copied. I'm sure there are ways to game these systems too if you are motivated enough.

One of the issues is also just academia being so hyper-focused on thinking everyone is there to become an academic which is not the case for the vast majority of people. So these exams and the guidelines for student evaluation are grounded in that expectation instead of the reality.


My partner recently finished her PhD and has started working as a medical writer. During her academic career, it was beaten into her head: don't plagiarize; everything must be yours.

In her first review after a few weeks on the job, her manager says that she takes too long to do her work; they just need her to take what the client says verbatim, fact check it, and then slap it in a document. She was treating her work like an academic assignment and putting in the effort to craft something unique, when they really just need a fact checking typist.

8 years in higher ed, published study on cancer drugs, thousands of mice died, millions of dollars spent on the lab... all so she can transcribe some text and then validate it against the studies.

Academia is the worst job training program ever.


Why would she take a job as a glorified typist after completing a PhD? Why is the company hiring a PhD to be a glorified typist?

Academia was never meant to be a jobs training program. The fact that many students treat it as such is not really the fault of the institutions.


Unfortunately, each faculty member produces about 30 PhD students in their career (one per year). The number of faculty positions has been close to flat since the 1970s. So only about one in 30 PhDs will get to be a faculty member. She probably took the job because she wants/needs a job, and the company likes having the status of PhDs doing the work.


Sure, so what? Anyone smart and dedicated enough to complete a PhD is smart enough to realize this going in. Surely the vast majority of people getting PhDs don't expect to ever become academics and have some other plan.


> Anyone smart and dedicated enough to complete a PhD is smart enough to realize this going in.

No, this isn't true at all. People don't complete PhDs because they looked at their options and thought that one was the best. They do it because other people told them they should, and they just never thought about it.


Society treats university as a jobs training program. Observationally it seems like the universities (in Asia in particular, and the US to a lesser degree) encourage you to treat your diploma like a golden ticket to a good job.


> Academia is the worst job training program ever.

Pet peeve of mine. Academia is not a training program for jobs. The role of academia is not to produce business-perfect candidates.

If business wants trained workers, they should train them. Cutting costs by not training them, then blaming universities for not producing trained workers is disingenuous at best.


Yes... and no. Yes, what you say is true - that isn't the point of an academic degree.

But no, because the way students (and parents) think about it is "go to college so you can get a good job". And many, many employers require a degree or they won't look at the candidate. In the real world, academia is functioning as a job training program.

Or at least as a gateway to the good jobs. But if it's going to be a gateway, but not do any training... that's pretty inefficient.


>> People that want to cheat or game the system will find ways, it isn't hard to do.

As someone who taught at a college, I find that a flawed statement. Let me propose this to you. You teach Calculus 1000 and there is an end-of-term exam. Your normal end-of-term exam is one where everyone sits in the same room, proctors are looking for cheats, things are checked, etc. Instead, this year you tell your students that it will be a take-home exam with the following rules: 1) they have 2 hours to do it during the take-home period; 2) no cheating, on the honor system.

Do you think the rate of cheating will be the same? I mean by your logic, it should be.


I had my fair share of uncheatable open book exams, calculators allowed back in college. One I had on Calculus (or was it Linear Algebra?) was way harder than a comparable closed book exam. You really needed to know the subject to pass vs just memorizing a couple of formulas and plugging them in the right place.

Another professor devised an exam that used your student ID as a variable of the first question, and subsequent questions used the previous answers as inputs. Impossible to cheat.
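A minimal sketch of how such a seeded, chained exam could work (my own illustration — the specific derivations and constants are invented, not the professor's actual questions): the student ID seeds the first question, and each answer becomes an input to the next, so a copied answer chain is detectably wrong for any other student.

```javascript
// Hypothetical exam chain: each student's ID determines question 1,
// and every subsequent answer depends on the previous one.
function makeExam(studentId) {
  const q1Input = studentId % 97;        // per-student starting value
  const a1 = 3 * q1Input + 7;           // Q1: evaluate 3x + 7 at x = q1Input
  const a2 = a1 * a1 - 5;               // Q2 uses the answer to Q1
  const a3 = (a2 + q1Input) % 1000;     // Q3 chains both previous values
  return { q1Input, answers: [a1, a2, a3] };
}
```

Grading stays cheap: the instructor recomputes the chain from each submitted ID, and answers shared between students stand out immediately.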


> I had my fair share of uncheatable open book exams, calculators allowed back in college. One I had on Calculus (or was it Linear Algebra?) was way harder than a comparable closed book exam. You really needed to know the subject to pass vs just memorizing a couple of formulas and plugging them in the right place.

We're discussing this in relation to COVID, so no large gatherings. This means no open-book in-person exams; I'm specifically talking about take-home exams.

> Another professor devised an exam that used your student ID as a variable of the first question, and subsequent questions used the previous answers as inputs. Impossible to cheat.

Really easy to do in a take-home exam.


If someone can figure out how to cheat on an exam where each answer depends on the previous, and the original seed is unique to each student, wouldn't it show a pretty thorough mastery of the subject matter?

I think the larger point is that it is fairly easy in any subject to design a test that is very hard to cheat on. It is much harder to find the resources in modern education to grade that test since each submission is likely unique.

Tests that are easy to grade (like multiple choice) tend to be tests that are easy to cheat on.


How do you propose that this should work e.g. in proof-based maths courses? You can't just tweak a theorem to prove by the value of some "unique seed", the theorem might become wrong.

It's true that you usually can't cheat your way through such an exam provided you actually write the answer yourself, but in a take-home situation you can always ask someone else to solve it for you.


You do what my teachers do and have unique problem generation software.


what? how does that work?


You give it a few hundred problem classes, and constraints for possible answers, and it will randomize the class of problems and generate a unique problem as well as calculate the answer. Then you submit the answer as well as your work and it gets corrected.

For things such as proofs, it might give you a problem for which the theorem is needed, then ask you to solve the problem, indicate which theorem you used, and then prove the theorem.
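One way such a generator could be structured (an assumed design, not any particular product): each problem class pairs a text template with a constrained parameter sampler and an answer function, and a per-student seed makes the whole exam reproducible for grading.

```javascript
// Each problem class: a template, a constrained parameter sampler,
// and a function that computes the expected answer.
const problemClasses = [
  {
    template: (a, b) => `Solve for x: ${a}x + ${b} = 0`,
    sample: (rng) => [1 + (rng() % 9), 1 + (rng() % 20)],
    answer: (a, b) => -b / a,
  },
  {
    template: (a, b) => `Differentiate ${a}x^${b} and evaluate at x = 1`,
    sample: (rng) => [1 + (rng() % 9), 2 + (rng() % 4)],
    answer: (a, b) => a * b, // d/dx(a·x^b) at x = 1 is a·b
  },
];

// Tiny deterministic PRNG (linear congruential), so a student ID
// always regenerates the same problem when the work is corrected.
function makeRng(seed) {
  let s = seed >>> 0;
  return () => (s = (s * 1664525 + 1013904223) >>> 0);
}

function generateProblem(studentId) {
  const rng = makeRng(studentId);
  const cls = problemClasses[rng() % problemClasses.length];
  const params = cls.sample(rng);
  return { text: cls.template(...params), expected: cls.answer(...params) };
}
```

With a few hundred classes instead of two, every student sees a structurally different exam, while the instructor can regenerate any student's expected answers from the ID alone.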


Do you have any example of any software that can generate problems for actual proof-based courses (e.g. abstract algebra)? I'm having a really hard time imagining this, we can't even fully automate theorem proving - how are we supposed to automate theorem generation? And this ignores the fact that you also need to make sure that all the proofs are "of the same difficulty" in terms of fairness.


Ah, no, the theorems would have to be manually selected. But given a high enough number and an automatically generated context it makes cheating much harder.


The problem is COVID. You can’t have a large gathering. So everyone is doing their exam at home.

Please tell me how you would structure such an exam without a proctoring system as described.


One option, if you have a reasonably low student-to-instructor ratio: make it an oral 1-1 exam for each student. A video call with the instructor and the student; you ask questions, they answer. If you have 20-ish students, it will eat up half a work-week or so, which is more work than grading 20 exam papers, but not _that_ much more.

Of course if you have 50 students per instructor this is not going to work...


It depends on the subject. I studied computer science and econ.

Computer science: Solve a complex problem in code. Include a git history. Be ready to defend your program design if I get suspicious.

Econ: Long answer question: Pick 5 concepts that we learned about that you think are most important. Explain them as you would to a ten year old.


> I had my fair share of uncheatable open book exams, calculators allowed back in college. One I had on Calculus (or was it Linear Algebra?) was way harder than a comparable closed book exam. You really needed to know the subject to pass vs just memorizing a couple of formulas and plugging them in the right place.

You can always just pay someone to do it for you.


The idea that an honor code is enough is belied by the fact that software like this catches cheaters, no?

I mean, we're obviously upset about false positives, and we should be. But I'm presuming that some people are caught who were cheating, and without the software they would have cheated and not been caught.

We can suggest that with an honor code in place, maybe some of those cheating students would not have cheated, because... I mean, if they were willing to cheat with software in place, I'm not sure why they would have been deterred by an honor code.

I think in about 98-99% of cases, people who claim an honor code prevents cheating are deluding themselves. Yeah, if you don't have any way of catching cheaters, then you can pretend you have a 0% cheating rate. But it's pretend.

P.S. I'm not speaking for my employer at all.


It's not certain that the software catches cheaters. It could be security theater. Even the false positives could just be to make people nervous.

Where would they get reliable training data?

There's a pile of money to be spent on this stuff, and virtually zero accountability. What's that a recipe for?


I have taught at multiple honor-code institutions (and still do). It does not prevent cheating. However, it shifts focus: I can go about my teaching life starting from an assumption that students are not cheaters—and I'm personally convinced that most aren't.

The flipside is that when you do catch a cheating case, you completely throw the book at them. It's legitimately easier to cheat under an honor system, if that's what you're wanting to do... so my assumption is that if we catch you at it, it's likely part of a pattern, and if we catch you multiple times the pattern is irreformable. It is not uncommon at honor-code institutions to expel students on the second offence (sometimes even on the first).

I do think that cheating is less prevalent at my institution (and my previous institution) than it is in the larger university population.


I perhaps wasn't clear enough, but I meant solely when it comes to these types of anti-cheat systems being used. Obviously it would be different for in-person vs remote/take-home.


I don't see why we should allow for the possibility that the ratio of cheaters would stay the same with or without the surveillance. That could only be true if the ratio of potential cheaters were so low that the threat of detection introduced by the spyware can't reduce the ratio further.

We may hate the software on ethical grounds, or because it degrades the exam-experience on many levels, or because its use can be considered abusive, but obviously it has an effect in the intended direction.


> One of the issues is also just academia being so hyper-focused on thinking everyone is there to become an academic which is not the case for the vast majority of people

I'm not sure I follow why this is bad. I think academic rigor and being a decent scholar, as well as being able to parse and produce research, are good things (and in my mind, those are the cornerstones of being an academic). Did I misunderstand you?

FWIW, in Germany (and likely other European countries) we have a two-tier system for higher education, consisting of universities and "applied universities", with the latter focusing on applied skills and the former focusing on research, which I think is sensible.


I don't think the problem is that academic rigor isn't good. I also should have stated I was talking more about US universities in this particular case.

The problem I see is that what you describe as academic rigor isn't what is taught and evaluated in many of the programs and classes I've seen or been a part of. A lot of these exams and assessments don't particularly evaluate your ability to research and understand knowledge. If I know, for example, that my physics professor uses a bank of questions, then I am much more incentivized to memorize that bank of questions than to understand the content and work the problems myself. Whereas in, say, a discrete math or algorithms class where the final exam/grade is based on a proof you have to write yourself, that encourages (or at times forces) you to learn and research, like you said.

I also think the issue, and this may just be me looking at it from my own experience, is that a lot of people don't want to be scholars, as you put it. They went to a university because of the unfortunate expectation that some jobs require that diploma as your entry ticket.


in Germany (and likely other European countries) we have a two-tier system for higher education, consisting of universities and "applied universities", with the latter focusing on applied skills and the former focusing on research

While it isn't codified, we effectively have this in the US as well.

Most of the "brand name" universities, plus the flagship state universities, conduct research and grant various doctoral degrees.

Then we have the middle-tier colleges (state and private) that grant masters (often only professional degrees like nursing, MBAs, etc).

And thousands of Baccalaureate-only and 2-year community colleges.

Also, in the US, "university" generally indicates a post-graduate degree granting institution. And "college" usually refers to a 2-year or Baccalaureate-only school. But this is also not codified, and there are notable exceptions (ex: The College of William & Mary is a top-notch full university whose name pre-dates the convention).


In the US, "college" means narrow subject matter, and "university" means a wide variety of colleges all together on one campus.

For example, there may be a "College of Engineering" and a "College of Arts and Science" that are part of one university.

It's possible to have a stand-alone college that isn't in a university. A good example is Berklee College of Music, which is narrowly focused on music.


I’ve never seen “narrow focus” as a definition for college. I’ve always seen it used as US News uses it.

https://www.usnews.com/education/best-colleges/articles/2018...

But, you are correct that subject schools within a larger uni are often called College of Such and Such.


Using this software sounds idiotic. They're trying to apply methods from in-person learning to distance learning. Instead of focusing on detecting cheating, they should be re-evaluating their grading and teaching methodology so that cheating is unproductive. Prioritize in-class participation and essay writing. If tests are needed, they can be timed and open book.

Great example of why our educational system isn't all that good.

I took a year of Japanese in college. A big part of the grade was memorizing and performing these dialogs. If you didn't remember the dialog word for word, you'd get points off. I wasted a lot of time trying to memorize those stupid things when I could have been acquiring new vocabulary words or studying Kanji. I took a trip to Japan a couple of years ago, and did some brushing up for a couple of months before I left. I learned more in those two months than I did in my entire time at school.


Running essays through searches for similarity is one thing, this de-facto remote polygraph is quite another.

Polygraphs are bullshit and basically select for submissive behavior, which I suppose is what these institutions are looking to reward, but it is conspicuous that nobody called these surveillance schemes what they truly are: degrading.


I have never once seen this software meaningfully work in any reasonable manner other than be a massive privacy violation and be a massive waste of time for the GSIs being forced to go through a timestamped log of every time a student blinked.

How about writing open-book tests in ways that are impossible to cheat on if you don't understand the material? You've had almost a year to adapt.

Why outsource student PII to a developing country sweatshop? This all seems absurd to me, it's been much easier to just write a one-liner inline script to hook blur, focus, visibilitychange, and onkeydown and log the userid when the event happens.
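A fleshed-out version of that one-liner idea (my sketch — the user id and event list are placeholders): a pure logging function that can run anywhere, with the browser event wiring guarded so it only attaches where a DOM exists.

```javascript
// Record each suspicious event with a user id and timestamp; the sink
// is just an array here, but in practice would be POSTed to the
// course server.
function makeEventLogger(userId, sink) {
  return (eventName) => sink.push({ userId, eventName, at: Date.now() });
}

// Browser-only wiring (skipped outside a browser environment).
if (typeof window !== 'undefined') {
  const log = [];
  const record = makeEventLogger('student-123', log); // placeholder id
  ['blur', 'focus', 'keydown'].forEach((name) =>
    window.addEventListener(name, () => record(name))
  );
  document.addEventListener('visibilitychange', () =>
    record('visibilitychange')
  );
}
```

This flags tab switches and focus loss without ever touching a webcam, which is the whole point of the comparison.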


> This all seems absurd to me, it's been much easier to just write a one-liner inline script to hook blur, focus, visibilitychange, and onkeydown and log the userid when the event happens.

If you're curious, this is exactly what Canvas does to detect foul play (although they don't advertise it for that, it just goes into the log)[0][1]. Schools don't think it's good enough, so they spend thousands on these more invasive solutions.

[0]: https://community.canvaslms.com/t5/Instructor-Guide/How-do-I... [1]: https://github.com/instructure/canvas-lms/blob/master/public...


>> How about writing open-book tests in ways that are impossible to cheat on if you don't understand the material? You've had almost a year to adapt.

So in a remote test situation, explain how this is possible without the use of this software. I can think of many scenarios where a second person can be in the room doing the exam.


What stopped people from having someone else do their take home tests before the pandemic?

On a more constructive note, it seems pretty simple to require some portion of responses to reference the lectures such that only someone who attended the class in this semester would be able to correctly answer, for example "use the method we discussed in the first half of lecture 3, use the third of the four numbers I told you to write down that day as variable y, if you weren't in class that day instead do X" If a test comes back and someone claims to not remember the material from any classes but still got everything right, that warrants scrutiny. If someone currently enrolled in the class is helping a person cheat, you can use standard anti-cheating techniques like comparing answers.


> require some portion of responses to reference the lectures such that only someone who attended the class in this semester would be able to correctly answer, for example "use the method we discussed in the first half of lecture 3, use the third of the four numbers I told you to write down that day as variable y, if you weren't in class that day instead do X"

Then you are measuring class attendance rather than subject matter mastery. If someone has already mastered the year’s material by the third class, why penalize them for skipping the rest of the lectures?


They aren't penalized, they can still do the X alternative that relies only on understanding the material. This is merely a method for determining who to take a closer look at, not what that closer look will reveal.


>> What stopped people from having someone else do their take home tests before the pandemic?

Nothing except these tests were not take home prior to COVID.

>> On a more constructive note, it seems pretty simple to require some portion of responses to reference the lectures such that only someone who attended the class in this semester would be able to correctly answer, for example "use the method we discussed in the first half of lecture 3, use the third of the four numbers I told you to write down that day as variable y, if you weren't in class that day instead do X" If a test comes back and someone claims to not remember the material from any classes but still got everything right, that warrants scrutiny. If someone currently enrolled in the class is helping a person cheat, you can use standard anti-cheating techniques like comparing answers.

So you are expecting the student to remember 100% of what they heard in the online classes? More likely, the student would write down these details and hand them to the cheater to use during the take-home exam. Or the cheater would "do X". In most cases the one doing the helping is not a current student, so it doesn't help.

I've passed college classes not attending any classes. Doing assignments, exams, and mid-terms. So would I be penalized for not attending classes?


> Nothing except these tests were not take home prior to COVID.

Take home tests were definitely a thing before covid.

> So you are expecting the student to remember 100% of what they heard in the online classes? Or more likely the student would write down these details and handle it to the cheater to use during the take home exam. Or the cheater would "do X". In most cases the one doing the helping is not a current student, so it doesn't help.

I expect students to take notes for their open book exams. If you're competent enough to identify all the material needed for the cheater to do an open book test, congratulations you have the knowledge to pass the test. If you just record every piece of information possible, congratulations you have spent way more time and effort than it would have taken to just learn the material.

If you don't attend any classes, do X. You're not being penalized for doing that, it just warrants scrutiny. If this is an entry level course that an intelligent person could teach themselves the material, then it probably doesn't matter if you cheated or not, you'll be found out in higher level courses. If this is a high level course, your department would probably have a good idea of how capable you are from past performance. If the guy struggling to get a C in introductory physics gets a perfect score on his quantum final without going to class once, that is super suspicious.


> Take home tests were definitely a thing before covid.

Yes, and they work pretty well for some subjects. But not for others.

> ... that is super suspicious.

But suspicion is not enough to take disciplinary measures.

In many cases, there really is no solution that is privacy-preserving, anti-cheating, covid-safe and affordable.


I can also think of many scenarios where group project-based assessments can be completely done by someone else. I've seen people pay others to physically attend as them, pre-COVID, literally hand over their student ID to be physically present and take a test. You can't really stop all of them.


That's no reason to lower the bar even further.


That’s a curious way to frame the problem. The harm from these anti-cheating measures is evident.


The one I replied to gives as its reason: you can't stop them all. Let's just give everyone an MSc, shall we?


> Let's just give everyone an MSc, shall we?

Are you claiming that having open-book tests makes it so that everyone can get an MSc? That’s what it sounds like you’re saying, so I assume that I just don’t know what you’re arguing.

Every system is going to catch some percentage of cheaters and wrongly punish some percentage of innocent people. The pandemic has put us in a bad position where we can’t use some of the more effective systems (in-class tests) for assessing knowledge / preventing cheating, so we are forced to come up with some kind of compromise, and in many subjects, open-book, take-home tests work very well (although they require more work from the professors).


The professors are overworked already. So that's one reason not to. Second, cheating is also fairly easy on open questions. Just get someone to prompt you the answers. That's what the proctoring software is for.

But it's the style of justification: because you can't catch them all, just ignore the problem. That's not ok. Education is supposed to teach you something else than cheating.


And yet I had many open book tests/assignments decades ago before there were even personal computers. Technology doesn't solve everything and some things aren't worth trying to solve 100%.


At the same time it doesn't mean we should not try to solve them to say 90%.


I'm a professor in the humanities so I can write tests that make it harder to cheat than some other fields. I cringe whenever I hear someone using these invasive kinds of software. There has to be a better way than making students install spyware.


Sure. Have at most ten to twenty students per teacher. Then the teacher can spend a couple of hours per student to assess their understanding. Unfortunately, in education quantity seems to be prioritized over quality. When you have over 50 students, let alone over 200, good luck grading authentic open-ended projects or papers in just two weeks.

As a result, most tests consist of multiple-choice questions and standard-format open questions. It is unclear to what extent they test students' understanding, and students do become quite proficient test takers, but at least these tests can be executed given the current constraints of available staff, acceptable rigor, and student expectations.

Unfortunately, you cannot move these types of tests to an online setting and expect students not to cheat at all. It is too easy to talk to classmates, look at the study materials, or even search the Internet for hints to answers or even the actual answers. The only option many institutions saw was to move to a draconian proctoring solution, because they just lack the means to roll out anything else given the constraints they have to work with.

Ideally, they would re-evaluate their choices regarding quality versus quantity, but because they need the large number of students to stay afloat, nothing meaningful will happen.


And that would somehow be bad? Maybe the sooner current system explodes, the better? After Thatcher, practical schools were closed and there has been pressure to make as many people earn a university degree as possible. Universities are lowering standards and churning out people that have no business studying at a university. Most people go to a university because that's what society, and employers, expect.

How many students do you think Plato, Aristotle, Jesus etc. had? Teach the brightest and the ones with genuine interest in the subject (I don't use the word "passion" because after adoption by corporations it no longer seems to mean anything). The rest will be fine with a high school/secondary school education. Most people retain only a fraction of that knowledge anyway.

Maybe it's time to stop treating (higher) education like an assembly line?


> Then the teacher can spend a couple of hours per student to assess their understanding.

That sounds as if you're steering close to a whiteboarding interview which a lot of people will claim isn't great.

Take home tests aren't perfect but I had plenty of them in school. And even when I didn't I don't think I had just about any engineering class that didn't assign problem sets that counted for a decent portion of your grade. If you can do take-home problem sets, you can do a take-home exam.


Take-home exams or assignments are great, but they need to be assessed as well. Here too, the number of students is a limiting factor. With too many students, the best a teacher can do is summative assessment: generate a reasonable grade for the student. Personally, I think formative assessment is far superior to summative assessment, but formative assessment takes a lot of effort from the teacher. Often more than is allotted to her for assessment (or the whole course!).

I was thinking more about a larger project or paper with quite a lot of freedom for the students that would take the teacher hours to get familiar with and give constructive formative feedback on. As part of that, an oral section to discuss the project and paper with the student would be great as well. In assessment, I would like the teacher and student to be partners rather than opponents.

If a student understands the material, together they should be able to have a conversation where the student can reflect on her understanding and learning to the extent where the teacher can determine that the student has passed the course. Of course, if there is an intense interaction between the student and teacher throughout the course, the whole idea of a final assessment becomes pretty meaningless.

I know, my ideas are a bit far-fetched and there are a lot of practicalities that are difficult to work out. But one can dream!


> That sounds as if you're steering close to a whiteboarding interview which a lot of people will claim isn't great.

One major difference is that unlike a whiteboarding interview there are presumably clear expectations as to what one is expected to have learned in the class and therefore what topics the discussion will cover, and at what depth.

Another issue with whiteboarding interviews, specific to computer-related subjects, is being asked to write code on the whiteboard, which is a severely unnatural act in all sorts of ways compared to the way one normally writes code. But an in-person assessment in a proofs-based math class, for example, would not have that problem. Similar for a CS (as opposed to software engineering) class.

And in a software engineering class, the concept of "test" is pretty odd, for the same reasons that whiteboarding interviews are; I'd expect longer-term projects to be closer to the right evaluation tool.


I could see notebooks being a useful way to both handout homework and projects as well as automatically check for correctness.

A merge/code review tool could be used to annotate individual problems. Have you seen this done?


We have collaborative notebooks on iko.ai[0] which makes it easy either to pair program/troubleshoot on the same notebook, or for a teacher or instructor to review work.

We haven't added autograding yet as it is specific, but nbgrader[1][2] comes to mind.

- [0]: https://iko.ai/docs/notebook/#collaboration

- [1]: https://nbgrader.readthedocs.io

- [2]: https://nbgrader.readthedocs.io/en/stable/command_line_tools...
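At its core, autograding of the nbgrader sort boils down to running hidden test cases against a student's submission and tallying points. Here's a minimal, self-contained sketch of that idea; the names (`grade`, `student_fib`, `hidden_tests`) are hypothetical and this is not nbgrader's actual API:

```python
# Minimal sketch of assert-style autograding: run hidden test cases
# against a student's function and tally the points earned.

def grade(student_fn, test_cases, points_per_case=1):
    """Score a submission: one point per hidden test case it passes."""
    score = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                score += points_per_case
        except Exception:
            pass  # a crash simply earns no points for that case
    return score

# A (correct) student submission:
def student_fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Hidden tests the student never sees:
hidden_tests = [((0,), 0), ((1,), 1), ((10,), 55)]
print(grade(student_fib, hidden_tests))  # 3
```

In nbgrader terms, the hidden tests live in locked test cells that are stripped from the student-facing notebook and re-run at grading time.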


I haven't used any essay-checking or exam-taking software yet.

Even so, based on what I've heard from my immediate superior, I'm flagging somewhere between 10-20% of all of the plagiarism and cheating cases in my faculty.

It's not hard to detect (generally good writing mixed with bad on essays / the repetition of similar phrases by different students on exams), but it does take a bit of time to confirm, and you need to write some policy-consistent boilerplate to make it simple to take action.

Some here have suggested that we need to write the equivalent of "open book" exams for the online environment. I'm not sure that would be beneficial. It's still important to exercise your mental faculties. People who can recall information and reason for themselves are still more valuable than those who can cut and paste. Or to put it another way, you need to understand the math in order to wield a calculator well.


In a lot of disciplines, open book doesn’t help you solve problems all that much.

It might supply some of the pieces required to solve problems, but the ability to remember all of those pieces by rote vs. looking them up is really orthogonal to the ability to apply them to solving a novel problem. Indeed the ability to find the requisite information to solve a problem in a timely manner is an important skill in itself and more accurately represents the real world problem solving one is preparing for.


How many students do you have and what's your teaching load? I think that's another primary consideration. Teaching a 3/3 (or god forbid, a 4/4) with 30 students in each makes for a rough grading experience.


> I can write tests that make it harder to cheat than some other fields

This might be my prejudice speaking (I studied math) but isn't this basically because grading is that much more subjective? So it would cut both ways, harder to cheat but also less objective / impartial assuming no cheating.


My university does a lot of oral math exams. After failing 3 times you have the option for an oral test as a last chance and people do indeed fail math in large numbers.

I was on the other side as an academic assistant for a few days (no role in testing) and I can tell you that oral math exams about algebra can be as hard as you want them to be, especially if you have problems with tests and get nervous. Some people even have to take medication to cope with that.

The prof I had was a bit of a dick, but he was really good at measuring how well you understood the concepts and whether you were able to apply them to problems.


Not necessarily. I ask a lot of questions of the form: here is a situation, what concept does it illustrate? Now sure, they can look up the definitions of the concepts, but they will still need to be able to identify how the concept relates to the situation.


I wonder if all disciplines have this problem.

Most of my engineering higher level classes were open everything not electronic. If you don't know how to tackle the problem, you're not going to finish on time.


I remember some math classes that did it well: send students home with original assignments to do proofs. I guess it's theoretically open to having someone else do the assignment, but so are lots of other things. Because I could just sit and attack the problems like problems, without massive time pressure and my every action monitored, I enjoyed those assignments and exams.

Similarly, in CS classes, I always enjoyed assignments where I got to go write a program to do something. Again, it could be cheated, but so could lots of other things.

The fact that some students cheat shouldn't ruin classes for everyone else. Since cheaters do lower the value of a degree, maybe universities should start taking legal action against them instead. Provide a huge disincentive without messing everything up for the majority of students.


“Original assignments” as in everybody has a different assignment? If so, dang that sounds like a huge time investment to create and grade for the prof and TAs.

WRT legal consequences for cheating: I mean, you can get kicked out of university. That’s a loss of thousands of dollars invested. And as a TA, if I knew I could be called in to testify in court every time I caught a student cheating... I’d never call it out. I don’t have time for that! And that still wouldn’t change that we’d have to be on the lookout for cheating.


This is honestly the better solution. In my upper division physics classes most exams were take home.


From the many students I know, I can tell you that cheating is rampant in online learning. The average GPA has gone up 0.3 points at my alma mater. I don't think people understand just how widespread it is.

This software isn't ideal, but there are no good solutions here short of a major rethink of how these classes, and possibly all of college, are structured.


Certainly I had many take-home exams in college and grad school. (Of course, I had many proctored in-person exams too.)

So one solution is to switch to open book take home exams as much as possible. The other frankly is to have an honor code and, at some point, recognize that some people will cheat but they're mostly hurting themselves.


Scholarships, internships and grad school spots are a scarce resource.

Students who cheat can secure a higher GPA with less effort. That effort can be redirected towards looking for internships and improving grad school applications.

So I fail to see how cheaters are only “hurting themselves”. Morality aside, if you pursue a cheating strategy your pay-off will be much higher when we consider the scarcity of resources available to students.


> Students who cheat can secure a higher GPA with less effort. That effort can be redirected towards looking for internships and improving grad school applications.

This is definitely not the reason why most people cheat.

> Morality aside, if you pursue a cheating strategy your pay-off will be much higher

As with any kind of lie, you will have to cheat more and more to compensate for previous cheating, and eventually your little web of lies will come back and bite you in the ass.


The first part certainly is why they cheat; higher grades allow easier admissions to jobs and better schools.

> As with any kind of lie, you will have to cheat more and more to compensate for previous cheating, and eventually your little web of lies will come back and bite you in the ass.

Completely disagree. People cheat to climb the ladder of life without having to put in the work or having the resources required to bypass that ladder. However, once you climb the ladder high enough, it is much harder to fall back down.

Sure you're not going to be a good doctor if you cheat your way through med school but if you cheat your way through undergrad and get into a better program with better resources, more driven peers and better professors, you are better off regardless.


At least the cheaters I know cheat because a) they’re unable to pass without cheating and/or b) they’re avoiding putting in effort. The jobs are part of the goal, but that’s true for basically everyone that attends college. I don’t know anyone who cheats and invests all the saved time into grinding Leetcode.

> if you cheat your way through undergrad and get into a better program with better resources, more driven peers and better professors, you are better off regardless.

Until you end up doing a surgery. You can’t cheat the real world. Your lack of knowledge will show eventually, and then it’ll limit your opportunities.


That's why I wrote "mostly." Some reasonable measures to deter casual cheating may well be worthwhile. At some point though you draw a line where you start hurting honest students more than cheaters. You're not going to prevent it 100% absent draconian measures.


> recognize that some people will cheat but they're mostly hurting themselves.

This assumes that what you learn in university is the useful part, not the grade or the allocation of time that could be used for hackathons, projects, clubs, or internships.


Make exams that are harder to cheat. For example:

- oral exams via video call

- written exams where students are distributed over a larger area (e.g. the university rents a warehouse for the examination time) so that the COVID-19 spreading risk is nevertheless kept very small.


This was done in my last exam phase and worked well. They cranked the AC to 11, which made it quite an unpleasant environment, but I would rather wear a jacket than take online exams.


I can remember taking many exams in the athletic field house at my university back in the 1990s. It was pretty standard when you were in a large class.

The desks were so far apart that the current "social distancing" standards would be met. I remember that many courses had multiple variations of the same exam given to students to further reduce cheating.

I'm taking an online masters right now and have taken a few (pre-pandemic) proctored exams. The nearby university offers to proctor any exam for $10. They have a bunch of rooms with desks, and it's no big deal at all.

This proctoring technology thing has gone too far and too fast. It was a knee-jerk reaction, and with a lot of complaints I bet that a lot of it gets dropped.


Let's roll this back and ask a different question: Are the students learning the material in a way that they can display mastery outside of an exam setting / on the job?

If so, who cares if cheating numbers / grades have been inflating?


At some point, when GPA is used exclusively to filter-out and rank students instead of as one of the many factors in an application, you get that metrics gamification.


Cheating is rampant in in-person learning as well. We shouldn't be treating online learning the same as in-person learning. I took distance learning classes back in the early 00's. Some courses required proctored exams, some had open book exams. For proctored exams, I would do them in the library, then the librarian would send the test back to the school. The open book exams weren't really any easier - you were expected to know the material and therefore the tests were more challenging.

In my professional experience, GPA has little correlation with how well someone can do a job. GPA going up or down by 0.3 points really doesn't mean that much to me.


At least where I'm from (Norway), grade transcripts will show a class grade distribution/histogram in the background of your grade.

That way the viewer / reader will be able to compare your results to your class.

Obviously if 80% of your class got A, that A is not going to look as impressive.

On the other hand, if you're the only student that got A, while the class distribution is monotonically increasing towards Failure, that's going to be a good thing for you.


When I went to college, our department gave two grades on the transcript: percentage mark and class ranking. The ranking is your ranking among all the sections of that class taught that semester.


For those of us who went to University of Virginia, this seems truly weird. I was allowed to take my exam anywhere I wanted. In physics, a lot of us would just head up to the much more comfortable tables in the physics library to take our exams. If someone walked in wanting to talk, we would say, "Pledge" (that is, we were under pledge not to cheat on the assignment) and they would say "OK" and leave.

And then there were the take home, open book, open notes, you have three days to finish this exams...


This is, IMO, just lazy professors that didn't bother to make exams for the digital space. Instead, they invested money and resources in digital proctors - with all the invasive pains that come with it.

If you can't bother to make a proper exam, don't bother. Rather make the classes pass/fail on project work, or grade the classes on projects / home exams. Trying to bring a 100% replica of the physical exam space into the digital space is just a recipe for disaster.


While some of it may be lazy professors, it may also be that professors don't know how to create exams in this new world, or they're not being given the appropriate resources.

What do you think it would take to bring educators up to this new standard that is very different?


I found myself brewing with rage while reading this article. The approach of closely inspecting a student's words & physical actions while taking a test just feels...dated. Ancient. Anachronistic. Embarrassing.

In the real world of 2020+, almost everyone has a portable Hitchhiker's Guide to the Galaxy in their pocket. Let people use calculators. Heck, let people use Wikipedia! If they copy an incorrect fact from Wikipedia, well, that's their problem; penalize them for it.

Education needs to be considered in the context of the world we live in. If you want to test students, then develop ways of TESTING students - move to live oral exams. Written long-form essays. Develop education techniques that USE the tools we're blessed to have around us instead of fearing them.

I don't have all the answers as I haven't thought about this deeply enough. But I've got to imagine there are ways of testing learning & knowledge that aren't based on fact memorization & regurgitation or performing calculations in your head that can easily be done on a calculator.


All good points. If these types of monitors are being used, then clearly the tests are flawed.

The hardest test I ever took was the AI final at CMU. Open book. Books didn't help :(


> move to live oral exams. Written long-form essays.

Yes. The application of principles, concepts and information is not terribly difficult to test in either a timed or take home setting.


Live oral exams can be an effective method of assessment. They can also be confounded by the testee's charisma. (See: any discussion of programming interviews, which are live oral exams.[1])

They do have the advantage of degrading fairly gracefully from in-person examination to remote examination.

They're also tremendously expensive compared to a test given en masse.

[1] Oral exams in a school setting would feel much fairer to people than programming interviews do, mainly because you know what to expect on an exam in a class you just took.


My Engineering Thesis had an oral component, and it was legitimately one of the most nerve-wracking things I ever went through. I had to spend about half an hour in a room with 3 professors getting absolutely grilled on everything from basic theoretical knowledge to detailed technical nuances specific to my experiment. At times it felt like they were deliberately trying to trip me up by asking leading questions and trying to steer me in the wrong direction.

It would be very, very hard to bluff your way through something like this with Charisma alone. You have to be damn sure you know your topic in and out. For once off exams before graduation I think it is acceptable but I'd pity students who had to go through a grilling like this every couple of months.


> They can also be confounded by the testee's charisma.

Agreed, but I don't think oral exams are the only type of effective live testing. Nor are live/timed exercises the only way to demonstrate knowledge.

Case studies, short/long form writing and creative exercises are all viable options. The primary concern seems to be scaling grading and review, which with a little effort and ingenuity is solvable.


There are, they just cost a heck of a lot more. You can no longer have a course taught by a $3,000 adjunct.


Folks are doing take-home exam wrong.

Caltech had tremendous success with a culture of honesty and take-home exams.


Caltech is also smaller and extremely hard to get into, hardly representative of avg college student.


Moreover, they select students that want to learn, and value honesty.


Most students go to college to get a piece of paper.

They couldn't care less whether they get it by cheating or by honest work.

The piece of paper is all that matters.


That means they are admitting too much.


I think this illustrates the problems I have with these ML systems. Technology is supposed to make our lives easier. While surveillance may help catch more "bad guys" (clearly something we as a society argue over the definition of), it also causes a lot of people anxiety. One has to question whether the added incrimination is worth the cost of anxiety to the public. Personally I do not think this is a good trade, because even long before all this surveillance technology I felt safe in society. I believe most people did, even in the 50's and 60's. So has the added surveillance really helped decrease crime? Or is it just correlated? (I'm not even convinced it is correlated.)

As for tests, there is an easier solution to all this. Write your test as if it were a take-home. Open notes. You can't stop people from communicating, but with these types of exams/assignments it often becomes clear who is doing it.

In my undergrad, all my upper-division classes' exams were take home because "I can't test you on anything worthwhile in 2 hours." Honestly, most of us enjoyed these more. We often did the exams in the same room (we had the back of a building that was dedicated as a lounge to the physics students) and no one really cheated. The closest was "hey, I'm stuck on this problem and I know you are finished. Can I just use you as a rubber ducky?" (more like just explaining the problem to the person and the other person saying "uh huh" and no more). It also made me feel like an adult because our professors trusted us.

As someone that frequently does poorly in a testing environment, I was also surprised that I was able to get much higher scores on these tests despite the added complexity. The simple fact that I could "take a break and come back" was all the peace of mind I needed (or grabbing a beer when I felt frustrated). This also better reflected, in my opinion, what solving difficult problems is like in the real world. I could grab my book, go to the page that I know is helpful, sit and think, take off my shoes, pace, whatever. I was treated like an adult and it felt good.


IMO, the problem isn't the software. It's how humans are using the software.

The software itself shouldn't have any control over the student's grades. A person should have to review the flags and actually find some wrongdoing. Not just push 'yes' and walk away.


This is an echo of the “guns don’t kill people, people kill people” argument that has been going around for ages.

Let’s say you’re only using the software to flag suspicious behavior, and bringing in humans to make the final decision. What happens when (inevitably) the software disproportionally flags people with dark skin because it is not trained to recognize dark-skinned faces? Or when the software disproportionally flags poor people, or people with families?

It means that those groups of people will be targeted by the (human) bureaucracy and tasked with defending themselves, when they’ve done nothing wrong. Humans will inevitably trust the algorithm, they will use the algorithm’s outputs to justify their own biases, and even investigations come with a cost.

There’s this meme going around that the “algorithm isn’t biased, it’s the data”, but that argument doesn’t really hold water—machine learning systems, by default, learn to recognize correlations, and correlations in the real world collected with real sensors contain biases. ML, by its nature, picks up and encodes those biases, and you must make an effort to remove them—you can’t just throw an ML algorithm at a pile of data.
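The "it's the data" failure mode is easy to demonstrate with a toy model. In this sketch (entirely synthetic data; the feature names like `tracking_lost` are hypothetical, not any real proctoring product's API), a detector trained on simple co-occurrence counts learns to flag a sensor failure that correlates with group membership, even though the true cheating rate is identical across groups:

```python
# Toy sketch of bias inherited from sensors: face tracking fails far
# more often for group B than group A, while the true cheating rate
# is identical (10%) in both groups.
from collections import Counter

# (group, tracking_lost, cheated), 100 synthetic students per group.
train = (
    [("A", False, False)] * 85 + [("A", True, False)] * 5 + [("A", False, True)] * 10
    + [("B", False, False)] * 45 + [("B", True, False)] * 45 + [("B", True, True)] * 10
)

# "Train" by estimating P(cheated | tracking_lost) from the data.
cheated = Counter()
seen = Counter()
for _, lost, did_cheat in train:
    seen[lost] += 1
    cheated[lost] += did_cheat

p_cheat_lost = cheated[True] / seen[True]    # 10/60  ~= 0.17
p_cheat_ok = cheated[False] / seen[False]    # 10/140 ~= 0.07

# The model learns "tracking lost => more likely cheating", so a policy
# of flagging tracking-lost students flags 55% of group B but only 5%
# of group A, despite identical cheating rates.
flag_rate = {g: sum(1 for grp, lost, _ in train if grp == g and lost) / 100
             for g in "AB"}
print(p_cheat_lost, p_cheat_ok, flag_rate)
```

The correlation it picked up is real, in the sense that it is present in the data; the bias comes from the sensor, and the model faithfully encodes it.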


I don't think you meant to do this, but you seem to have inadvertently made a very damn good argument for "guns don't kill people, people kill people."

People are getting screwed because they are at the long end of an unbroken chain of crap. Crappy organizations buy crappy software and crappy professors take the results seriously. There exists a crappy tool that flags all the black people as cheaters (or whatever; the point is that the false positives are unacceptably common and unacceptably distributed).

Blaming the gun (the software in this case) is tacitly condoning the unbroken chain of half a dozen people/entities that are failing to do the job they are being paid to do. The software developers shouldn't be building crap software. The companies shouldn't be selling crap software. The universities shouldn't be buying crap software. The professors shouldn't be using the results of crap software. To look at that situation and say "yeah the problem here is that this crap software exists" is beyond naive. The problem is that nobody is being accountable for the bad outcome. I'm not asking for a whipping boy or a scapegoat here, the problem is that when nobody can be held fully responsible it seems like nobody even gets held partly responsible.


This argument is fallacious. Your argument assumes that “blaming the gun” is condoning the users, and this is a false dilemma.

There’s really no room for absolutism, where blame is assigned to one source rather than distributed among many contributing factors. Imagine how dangerous air travel would still be if, after an accident, investigators looked for only one cause to blame, and tacitly condoned anything else they came across in their investigations.


>This argument is fallacious.

Re-read my comment. My point is that fault is distributed sufficiently that accountability seems to evaporate. This is a systemic or organizational problem. Anti-cheating software is just the form this specific instance has taken.

>Your argument assumes that “blaming the gun” is condoning the users

Well until now you weren't throwing even the slightest bit of shade in their direction.

>and this is a false dilemma.

And you've created a false middle ground.

>Imagine how dangerous air travel would still be if, after an accident, investigators looked for only one cause to blame, and tacitly condoned anything else they came across in their investigations.

What's the difference between "dangerous harmful cheating software" and "cheating software that's being shoehorned into use cases in which it was never expected to be used"?

That's why you don't blame the (metaphorical) gun. Everything is just tools.

The FAA doesn't go off half cocked about the evils of grade-2 fasteners because once upon a time an engineer thought a grade-2 would be enough when he should have used a grade-5. I can't believe I have to defend (invasive to the point of being unethical) anti cheating software but these sorts of software tools are just tools and can be used either wisely or poorly. The software doesn't know or care how it's being used. In an industrial setting they can (and are, same underlying tech different companies) be used to design more effective interfaces to reduce errors (which I think we would all agree is a net positive contribution to the world).


> Re-read my comment.

Stopped reading at that point, cheers.


I think this misses the parent’s point.

The point is to not let the algorithm make decisions. The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible. Unless algorithms can be trialed and held accountable, they shouldn’t be allowed to make decisions.

Also guns don’t kill people, people do. Otherwise explain to me why it would be okay for certain institutions to be armed but not individuals. If guns are the problem, then no one should have them (including the military/police).


> I think this misses the parent’s point.

Just because you disagree with me, it doesn’t mean that I misunderstood the viewpoint I’m responding to.

> The point is to not let the algorithm make decisions.

And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system. If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.

As an analogy, it’s like a driver in a partially autonomous car. As soon as the automation takes over, the driver stops paying attention to the road. We can make a big fuss and production and talk about how it’s the driver’s fault, and the driver should pay attention, but we’ve placed them in a system where they are discouraged from paying attention, and the system is more dangerous as a consequence.

> Also guns don’t kill people, people do. Otherwise explain to me why it would be okay for certain institutions to be armed but not individuals. If guns are the problem, then no one should have them (including the military/police).

This is a false dilemma / false dichotomy. This argument assumes that EITHER access to guns is to blame OR people are to blame, but not both, but there are obviously other ways to think about the problem.

Any rational way to look at problems will look at multiple contributing factors.


I do think you missed the point.

Parent: The software itself shouldn't have any control over the student's grades. A person should have to review the flags and actually find some wrongdoing. Not just push 'yes' and walk away.

You: Let’s say you’re only using the software to flag suspicious behavior, and bringing in humans to make the final decision. What happens when (inevitably) the software disproportionally flags people with dark skin because it is not trained to recognize dark-skinned faces? Or when the software disproportionally flags poor people, or people with families?

Answer: A person should have to review the flags and actually find some wrongdoing.

You: It means that those groups of people will be targeted by the (human) bureaucracy and tasked with defending themselves, when they’ve done nothing wrong

Me: The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.

> And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system.

Hence why the humans should be held responsible for not addressing bias in their system. And why the actions of an algorithm should be the responsibility of its creators.

> If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.

So...? What’s your solution? All you’re saying is that humans will remain biased; yeah, they will. That’s why we have laws that punish discrimination and bias. If your company creates products (algorithms) that discriminate, you should be held responsible. The human element is not there to “work harder” but to assure that what you’re releasing works properly. If you don’t think increased accountability fixes the problem, please tell us what would be “enough”.

> This argument assumes that EITHER access to guns is to blame OR people are to blame, but not both

No assumption. If you think a cop can have a gun but a criminal can’t, then the gun isn’t the problem. If you believe cops can have guns but civilians can’t, then the main factor is the person with the gun and not the gun itself. This isn’t an argument against increased restrictions, and if you believe no one should have guns (including the government) I’m all for it. But if you believe some people have the right to have guns while others don’t, I’m hard-pressed to see any other determining factor except who has the gun.


> I do think you missed the point.

Please make an effort to engage with the comments I make, rather than making guesses about my mental state.

> Me: The human bureaucracy is suppose to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.

The human bureaucracy doesn’t do that very well. The human bureaucracy is deeply flawed and has limited skills. We can assign blame to the human bureaucracy for its failings all we want, but if we want to effect change then it’s necessary to include a broader range of factors in our fault analysis.

In other words, “assigning blame” is a low-stakes political game, and “root-cause analysis” is what really matters.

This is like the 737 MAX failures. You can say that it’s the pilot’s responsibility to fly the plane correctly—but the fact is, pilots have a limited amount of skill and focus, and can’t overcome any arbitrary failing of technology. So we rightly attribute the problem to the design of the system, of which the human is only one component.

This grading software is like the 737 MAX—it’s software that, as part of a complete system including non-software components like humans, does a bad job and needs repair. The 737 MAX reports listed something like NINE different root causes.

I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.

> Hence why the humans should be held responsible for not addressing bias in their system. And why the actions of an algorithm should be the responsibility of its creators.

So you’re saying that there’s a problem with the software, and that we shouldn’t place all the blame on the college administrators? Isn’t that what I’m saying?

> But if you believe someone has the right to have guns while others don’t, im hard pressed to see any other determining factor except who has the gun.

I do believe that not everyone should have the right to own guns, but if you’re interested in arguing with me about it, I won’t engage. If the comparison doesn’t work for you, think of something less emotionally charged like the 737 MAX or the Tesla Autopilot—both are scenarios where we rightly cite the software / automation as a root cause in accidents.


> I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.

There are multiple intermediate causes, and all of them are the responsibility of the human bureaucracy—including, to the extent it contributes, the selection, use, and failure to correct bad software—and all of them stem from one root cause, to wit, that the bureaucracy faces insufficient consequences for its failures and thus lacks motivation to do its job well.

Now, were the analysis being performed on behalf of the bureaucracy because they had decided to do their job, rather than being part of a discussion outside of them, the causes which are intermediate from a global perspective would be root causes, sure. Context matters.


I will agree that certain biases are almost certainly baked into the software and that they will disadvantage anyone (and any situation) that isn't considered 'normal' by the software's creators.

If your argument is that they should be paying a person to sit there and watch a class of students while they're doing the tests, I'm not against that. They probably should.

But humans will always attempt to make their own work easier and less time-consuming, and this is a tool for that. Eventually, something like this is going to exist for distance learning. This is unlikely to be the final configuration of that tool, but it's a step on that road, no matter how much people don't like it.

What's needed are proper controls on the usage of the tool. And proper training. And proper oversight.

If your argument is for something else instead of the above, then I don't know what your solution would be. "Don't have school" and "don't worry about cheating" aren't acceptable.


Except that the software is generating "scores" based on facial movement/expressions. Facial recognition/scoring is known to result in racist outcomes (links in article).

The software isn't suited to its purpose.


No, the problem is the software.

You give people a tool and they’ll use it. Taking a measurement changes the subject being measured, showing a metric changes the judgment of the watcher. You have to be an extremely thoughtful and interested person to be shown a metric and have it not color your opinion if it has no value.


Voice of San Diego has been doing some great reporting. This is yet another example.

Profs who use these things are examples of teachers who just don't care enough to revamp their courses to involve more projects, critical-thinking assignments, or exams where cheating won't help. They don't want to do the work of adjusting the course for remote learning; they just give the course they've always taught and do whatever it takes to get as close as possible to the in-person exam they used to have.


What is the hourly rate that you would expect for a professor?

Consider that a tenure track professor (ignoring adjuncts) making $60,000 is expected to teach 3-4 classes, write original peer-reviewed research, and perform "service" (which takes the form of helping the university run itself, external speaking events, and advising.)

Each individual class takes 15-20 hours a week for a course they haven't taught before (which is common). Some classes require managing a group of Teaching Assistants for classes of 500 people. Other universities may have a teacher responsible for as many as 70 students without a TA.

So, that ends up being a 60 hour week. (This is average; I've seen more.) So, without overtime, that ends up at a little more than $22 an hour for a 10 month contract.

(Summers, in which they are paid at most a small sum for teaching summer classes, are not included. However, that's also when they are expected to research, write, and publish, all technically unpaid. If you consider that time as paid, drop the $22 to $20 an hour.)

So, when you talk about "not caring enough," consider that they do not get stock options for working another 20 hours a week.

Further, a large portion of their job is bringing prestige to the university. That is why their job is based on the output of their research rather than their teaching. As such, in the first 7 years of the job, they are being judged on their publications first and their teaching second. So you are upset with a professor for only working 60 hours and not dedicating more time to what is a secondary concern for promotion and, for pre-tenured professors, for keeping their job.

If you want professors to care, you need more lecturers paid the same amount to teach fewer courses. However, that makes university more expensive. (Or, you end up hiring adjuncts making sub-minimum wage for the same workload, sans research.)

It's a hard problem that looks easy on the outside.


What professors? The proportion of college classes being taught by professors is below 50% and dwindling. Most college teaching is done by adjuncts.


I mentioned adjuncts in there as well.


This has been my experience in college so far. The technical college I was at before had better policies for online learning: it prevented cheating by simply making you do more of the work yourself, so that cheating would only make things harder toward the end.


These 'anti-cheat' software packages don't actually prevent cheating. One of my teachers tried to use one called MyPerfectice (thankfully we persuaded her to switch). I think most of these are built without any understanding of the constraints set up by the browser. I don't know about the others, but MyPerfectice gives you 4 or 5 window-focus-loss events before marking your attempt invalid. The funny thing is that they allow students to upload files from their system and allow focus to be lost during that. So if you click the upload button, it opens the OS file-selection window and you are then free from all the checks they set up. Open a new window and cheat as much as you like.

I'm wondering if it's possible to just do a custom build of Firefox/Chrome that does not trigger these out of focus events.

Coming to the video streams, just record a 1-2 minute video of yourself staring into your screen from your webcam and loop it through OBS. No one will notice anything. For MyPerfectice, it is even easier, they have their camera controls exposed as unobfuscated global objects. So you can essentially do something like `camera.stop()` and the webcam light turns off.
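For the curious, here's a minimal sketch of how the focus-counting logic described above probably works. This is my own guess at the shape of such code, not MyPerfectice's actual implementation; the class and method names are made up for illustration.

```javascript
// Hypothetical sketch of a browser-side proctoring check that counts
// focus-loss events. The fundamental limitation: the page only sees the
// events the browser chooses to fire, so a modified browser (or a legit
// suspension during an OS file dialog) defeats it entirely.

class FocusGuard {
  constructor(maxBlurs) {
    this.maxBlurs = maxBlurs;   // allowed focus losses before invalidation
    this.blurCount = 0;
    this.invalidated = false;
    this.suspended = false;     // true while a file-picker dialog is open
  }

  recordBlur() {
    if (this.suspended || this.invalidated) return;
    this.blurCount += 1;
    if (this.blurCount > this.maxBlurs) this.invalidated = true;
  }

  // The "upload button" loophole: checks are suspended during file
  // selection, so any focus loss in that window is never counted.
  suspend() { this.suspended = true; }
  resume() { this.suspended = false; }
}

// In a real page this would be wired up roughly like:
//   const guard = new FocusGuard(4);
//   window.addEventListener('blur', () => guard.recordBlur());
//   fileInput.addEventListener('click', () => guard.suspend());
//   fileInput.addEventListener('change', () => guard.resume());
```

Since the whole check lives in page JavaScript listening for `blur`/`visibilitychange` events, a browser build that simply never dispatches those events (as suggested above) would leave `blurCount` at zero forever.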


There has to be a point where students will revolt against the abusive behavior of universities in the US right?

Or is this just another social pathology that we need to rethink?


They can if they stop enrolling and paying tuition. There are plenty of options online for college-style certifications that most employers in my industry (software engineering) would accept.

I get that other professional training with certifications would be harder to do online, but those tend to be done in community colleges anyway. The most "abusive" shops are universities offering BS degrees to students swimming in debt.


A fairly typical form of exam in Russia is "written-oral", where students are writing a test together as usual, with everyone given different problems to solve/theorems to prove/things to describe, but the time slot is way larger than required for most people, and in the end when you are done a professor (of which several are usually present) sits with you for 5-15mins to discuss whatever you wrote, maybe asks some extra questions, and gives you a grade on the spot.

So, if you copy a perfect solution and have no idea how what you copied works, you are likely to fail anyway; if you just happened to get unlucky with your problems you may still be able to talk your way into a B/C showing you know stuff.

This sounds like an absolutely perfect match for zoom, since the problem where there are a bunch of people in the audience constantly discussing some other problems (or problems similar enough you can get a hint) doesn't exist anymore, and you don't need to book a large room for 6 hours so the stragglers could all be talked to by professors.


This reminds me of what Filtered.AI does, except that it does it for some sort of automated job interview. When interviewing through them, I had to install a Chrome plugin that recorded audio, video, and browsing activity at all times. But I guess they're just trying to fix the software engineer interview process.


Gross. If someone asked me to install spyware for a job interview I'd blacklist the company as a place I'd never want to work.


Seconded. Who would want to work at a place that installs surveillance-ware on the personal machine of a prospective employee?

If your technical interview team can't tell within about 5 minutes whether I did the technical work myself, then there is a lot wrong with your whole organization.


I’m skeptical that you could train a DNN (or any AI) to detect cheating, except maybe from browser usage patterns, but recognizing facial patterns!? What data set could they possibly have found to detect such a thing, and what proof do they even have that it’s detectable?!


In a couple of other threads people recommended Grammarly. I'm giving it a try right now.

I without question wrote every bit of this text. It's saying I plagiarized. I feel like plagiarism software isn't quite there yet.

One thing is for sure, though. I certainly don't use enough commas.


My wife teaches CS (remotely right now) and I feel like a lot of the commenters here don't really understand what "cheating" consists of right now. The common vectors are:

- Finding an answer online (on Chegg), and copy-pasting it. For homework, this is super common. Unless you build every assignment from scratch each quarter, there will be an answer available online.

- Paying someone to take the exam for you.

"Let them use wikipedia" resolves like 0% of the problems people actually run into when teaching remote classes. The problems are when people cruise through a class with 0 actual effort, through a combination of looking up answers online, and getting someone to just sit exams in your place.


At least the online class cheating increases fairness. Now you can cheat without having the money to purchase a bogus disability claim. For many years now, all the well-off schemers have been taking tests with twice as much time as normal. Nobody in academia dares to push back at this fraud, out of fear that they might be pilloried for failure to accommodate. Honest students usually aren't even aware of just how unfair the system is. It affects high school, college admissions tests, and college.

The whole system is put at risk by the fraud. Grades become less meaningful because we've added a randomish negative signal. That devalues everything.


I had a lot of mental health issues and I registered with my school's accessibility services. I was surprised I could get extra time on tests, extensions, etc. so easily. Unfortunately, none of those accommodations were even helpful for me. An extension would just make me fall further behind on other work, and I didn't have test anxiety or a disability which would prevent me from taking tests quickly (though I was offered this accommodation anyway).

The only accommodation that I took once or twice was having a teacher re-weight marks from a missed assignment onto other assignments, which was only good for me if I was going to do well on those other assignments. I think for mental health, the most important things my school did for me weren't even accommodations. They were: letting me reattempt courses any number of times, letting me take a year off without even having to notify them, letting me take a reduced course load, and stuff like that. These things give no advantage to students who utilize them, but can make a world of difference to students who need them. Without them I'd have failed my degree, but now I'm mentally healthy and I'm going to graduate soon.

Anyway, it's shitty that some people abuse accommodations, but most of them don't give them an advantage over other students, and some students do need them, so getting rid of them is not really a solution. The main one that would give an advantage is increased test time (the solution to which is to write exams where students already have plenty of time -- the majority of my profs already do this).

As for the whole system being put at risk, I think the whole system is going down the shitter anyway. Why do I need to pay thousands and thousands of dollars a year for an education which could easily be delivered for less than a tenth of that price? Prestige? I don't think universities will survive the modern Internet outside of programs which need hands-on lab experience.


Getting rid of accommodations in general is not the right thing to do, and I am very glad that you were able to get the sorts of support you actually needed, as opposed to just the cookie-cutter extra time on exams.

That said, giving time and a half or double time on exams to anyone who takes the time and money to track down a compliant enough medical professional to diagnose them with ADHD is not the right thing either. And yes, some people really do have ADHD and need various support, not limited to extra time on exams, to succeed in the school environments we have created. But _so_ many people are diagnosed with ADHD who have nothing of the sort...

Anecdotally, my acquaintances who teach at elite liberal arts colleges say that 25-40% of their students nowadays are receiving extra-time accommodations, a sharp rise over just 10 years ago. It's possible that the prevalence of ADHD and various mental health issues really is that high amongst high-performing teenagers nowadays; the college admissions crapshoot and lead-up to it surely is not helping with mental health. But I find it much more likely that there's a significant amount of gaming the system going on, unfortunately.


This would fix the problem: if a student in a course gets an accommodation, all students in that course get the accommodation by default. It would be OK to opt out, so, for example, we don't force everybody to use a screen reader (but you get one unless you opt out). Clearly this solves the extra-time fraud.


The problem with this is that legitimate ADHD students are then at a disadvantage relative to the other students, since they can't use the time as effectively as the other students can.


I haven't been buried on HN in a while, so I will go ahead and present my opinion.

It's clear that the proper execution of anti-cheating is critical to avoiding a situation where the cure is worse than the disease.

Having said that, I do think that it is needed and will improve college efficacy.

Cheating in college has been an epidemic, with a majority of students surveyed saying they have cheated, and a significant percentage of those saying that it's acceptable.

If the courses are too hard or curriculum irrelevant or course loads too high, then those problems should be fixed. Cheating is not the answer.


Why are we building exams that are only valid if people don't use google? Will google suddenly disappear when I get a job? Modern exams should require me to use google.

What we need is a way to make every exam question unique to each student. If a student googles the "how to" and then does it, great!

Is it fitting that Vernor Vinge wrote about this in Rainbows End [1]? He was a professor at SDSU.

[1] https://en.wikipedia.org/wiki/Rainbows_End
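As a sketch of what per-student uniqueness could look like: derive each student's question variant deterministically from their ID, so the same variant can be regenerated at grading time. Everything below is illustrative (the hash is a simple FNV-1a, and the function names are made up), not any real exam system's API.

```javascript
// 32-bit FNV-1a hash of a string. Deterministic, so regrading
// always reproduces the same variant for the same student.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by FNV prime, keep 32 bits
  }
  return h;
}

// Pick which of `numVariants` versions of question `qNum`
// a given student receives.
function questionVariant(studentId, qNum, numVariants) {
  return fnv1a(`${studentId}:${qNum}`) % numVariants;
}
```

With a handful of parameterized variants per question, each student gets a stable but different exam, and looking up a classmate's answer verbatim stops working.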


I just got rid of my exams for my deep learning courses once the pandemic hit. It doesn't make sense to have these traditional exams with remote education.


Boom. That’s it. It’s not cheating that’s the problem—that’s the epitome of a red herring. Emulating a traditional environment is not how online learning software was intended to be used, nor is it practical.

Even when I was in college, the number of traditional courses serving additional content through Moodle/Blackboard was stupid high. I understand honor system and all that jazz. But really, if the ~15-some athletes in a business 200-level section are going to gather in the library to cheat together...then maybe the quiz should have been delivered traditionally. At least then, I wouldn’t have to hear the half-a-class rant from the professor about what losers the cheaters are.


College students are easy to manipulate because they're young and naive and still used to taking orders from authority figures like their parents. They need to realize the power they wield and simply refuse to participate in this intrusive monitoring. They don't have to play the game, and the school can't exist without their tuition.


This sounds like a nightmare. Just use all this time we have to give 1:1 verbal exams to test knowledge and gaps.


Then you’re mainly testing oral English skills.

Also, in most subjects I could mark an exam in a fifth of the time it took a student to write - whereas with oral exams it takes at least as long as the exam took. Quintupling the load on TAs and profs sounds like a raw deal.


I'm curious whether proactive measures to curb cheating, like in the parent post, result in lesser people cheating outside of the college environment.

Any controlled studies?

Are college students more ethical and honest than their non college counterparts?


> results in lesser people cheating outside of college environment

What metric are you using to determine which people are "lesser"? Less educated? Less motivated? Less intelligent?


Sorry, I meant lesser = fewer.


It's stupefying that this can be happening when we consider the vast amount of science fiction depicting these kinds of situations. Maybe someone took them for instruction videos instead of warnings.


As a current college student (33 years old), I think higher education is just a terrible rite of passage. I could give them tips on HN, but who am I kidding? I won't waste my brain power on their problems.


So do the students have any legal recourse here? It seems pretty invasive, and I personally would not comply if possible, but I'm not sure that would be an option.


If a prof needs a final exam to be able to assess a student's proficiency, they are doing it wrong.


Not saying you’re wrong, but what are you proposing as an alternative? How should comprehensive knowledge be tested? Weekly homework/projects?

P.s. love the user name.


For my Masters degree, most of my courses were absolutely like this. Our assignments were difficult and heavy. There was no expectation that you should complete them without Google or reference material.

In contrast, the "final exams" were either non-existent or extremely easy, accounting for only 10-20% of the grade where they existed.

I really liked this approach. Make your assignments very difficult. Alternatively, give difficult 10-15 minute quizzes every other week. IMO, it's a much better way of evaluating students.


Comp Sci majors are the ones that are definitely beating the system though. Speaking from experience.


Software people in general like "hacks". We game the hiring process by reading books like "Cracking the Coding Interview". The hiring process is broken, but that's a different story.

I worked super hard in school, did well and pulled several all nighters. I work at an enterprise now and my algorithm knowledge is useless. I just build enterprise web applications, which is fine.

In software, any job you take will involve learning new stuff and new ways of doing things. I gained much more from classes that focused on building actual working projects than from strict algorithms and testing. In our AI class we had to pick an AI algorithm and implement it in a popular game; it was one of the best experiences in school.


Ah yes. Did you use Ghidra, IDA Pro, Binary Ninja, Hopper Disassembler, Radare2/Cutter, or something else? How many bytes did you have to modify?

Put it on your resume.


Sounds like intercepting the video buffer in kernel-space would be a good start.


All the "anti-cheating" (spying) software should be outlawed.


These companies should actively be put out of business. College is garbage as it is. Now they're actively suppressing people with lower incomes and fewer resources or abilities.


I feel like this has to do with the generation of kids hacking in COD matches. Then they realize there are actual real-life consequences to cheating.


So many privacy flags on this one, I'm shocked colleges are using this software.

Just a few highlights from the article that really stick out:

“You have to record your environment, you have to record the whole desk, under the desk, the whole room,” Molina recalled. “And you need to use a mirror to show that you don’t have anything on your keyboard.”

On top of that, if the wireless connection was disturbed during an exam, Molina said, students would receive an automatic zero — no excuses.

He said he didn’t realize he hadn’t sufficiently shown his notepaper to his webcam, or that his habit of talking through questions aloud would be considered suspicious.

“At the beginning of the exam, you leave the area for about one minute without explanation,” Merrill wrote in an email to Molina. She added that it looked like he was using his calculator for problems that did not require a calculation and that he solved certain problems too quickly. As a result, Molina was given an F in the course and his case was submitted to SDSU’s Center for Student Rights and Responsibilities, where he could appeal the decision.

Neekoly Solis, an SDSU junior and first-year transfer student, said each test-taker now has to verbally explain each of their calculations to their webcam every time they use their calculators during an exam.

Then, she had to show the camera her desk, and underneath her desk, with her bulky desktop computer. She realized she was in a pair of shorts, and her webcam was picking up — and recording — seconds of her bare legs that could be seen by her older male professor. She was creeped out.

“You have to do a crotch shot, basically,” said Jason Kelley, associate director of research at the Electronic Frontier Foundation, a digital privacy group based in San Francisco. He recalled watching a tutorial video from another proctoring system called HonorLock, horrified as he watched the video subject do a long pan of their body.

Some other unsettling parts about the data they're collecting:

Respondus’s website states that the default data retention period for Respondus Monitor is five years, but the client can change that.

And worse yet, what about the appeal process? It's not exactly in the students' favor:

Molina appealed. But even well into the fall semester and over a month after the accusations were filed, the office had canceled his scheduled meetings twice due to coronavirus-related emergencies.

After the third rescheduling, Molina finally had the chance to explain himself. One week later, he received a letter of “no action,” meaning the university would not pursue disciplinary action against him. He forwarded the letter to his business administration professors and requested that he get the grade he deserved. He said he had already emailed the student ombudsman twice, and never received a reply. Merrill finally gave him his grade back, almost two months after he’d received an F in the class.

In conclusion, you have a dodgy software program that's highly invasive to your privacy. It can take months to get your appeal figured out. In the meantime, you're left to twist in the wind. And worst of all, the company keeps your data for five years.


[flagged]


Interesting strategy. What after that?


That's a knee jerk reply.

But there's a point to that.

Democratising education is great for literally every industry except the education industry itself.

There should be some way to do that.



