Unfortunately, after getting a bit sick this past week, I wasn’t able to attend many of the classes, including the gallery walk with my fellow classmates. Luckily, I did manage to make it to the class that used handwritten sticky notes with some of my groupmates! This style of learning definitely wasn’t new to me or my groupmates, and while I hadn’t used sticky notes in a while, they were very understanding when I told them that my typical penmanship could be mistaken for chicken scratch. After briefly dividing sections of the text among ourselves, we dove into our assigned parts to begin answering the question, “What are the ethical implications of A.I.?”
The article I was tasked with going over was Marantz’s “Among the A.I. Doomsayers,” and right off the bat the title grips readers (or in this case, me) with a negative implication of what A.I. brings to the table in a modern setting. Within the article, Marantz draws on statements from people either in support of A.I. or totally against it. Such references include (but are not limited to) Snoop Dogg, Twitter user “Beff Jezos,” and Oxford philosopher Nick Bostrom. Colorful references aside, the article suggests that while the ethics of A.I. aren’t exactly horrible, it’s definitely not the best route to go either. Marantz describes an experiment by OpenAI researchers in which they instructed an A.I. model to get the most points available in a boat racing game, and while the model did the task it was assigned, it managed to do it in the most convoluted way possible. The model had apparently found a way to abuse the game’s point system by lodging itself in one part of the map and spinning around in circles for as long as it could.
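To see why the boat just spun in circles, it helps to think of the researchers’ mistake as a scoring problem. Here’s a hypothetical toy version in Python (my own made-up numbers, not OpenAI’s actual game): an agent told only to “maximize points” compares finishing the race against circling a respawning bonus, and picks the loophole.

```python
# Toy sketch of "reward hacking" (hypothetical values, not the real boat game):
# the agent is scored only on points, so it never has a reason to finish.

FINISH_REWARD = 50    # one-time reward for completing the course
LOOP_REWARD = 3       # reward each time the agent circles a respawning bonus
EPISODE_STEPS = 100   # steps available in one episode

def total_points(strategy: str) -> int:
    """Score one episode under a given strategy."""
    if strategy == "finish_race":
        return FINISH_REWARD                 # race ends; no more points
    if strategy == "circle_bonus":
        return LOOP_REWARD * EPISODE_STEPS   # loop the bonus every step
    raise ValueError(strategy)

# A pure point-maximizer prefers the loophole the designers never intended.
best = max(["finish_race", "circle_bonus"], key=total_points)
print(best, total_points(best))  # circle_bonus 300
```

The point of the sketch is that the model wasn’t being clever or rebellious; the instructions just rewarded the wrong thing, so it got the “correct” answer by the wrong route.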
As funny as a robot spinning in circles really is, it makes me wonder how many times I myself have used the wrong formula to obtain the correct solution in my life. It’s certainly in the double digits, but how am I supposed to count the times when I don’t even know I’m doing something incorrectly? My life certainly wouldn’t be over if people found out, but unlike an A.I. program, I wouldn’t face accusations that my next plan was to take over the world. That’s why I think A.I. might be a tool that can be used in higher education as much as the user wants, as long as they also know that the programs they’re using are capable of being incorrect in how they go about answering questions.