Having a Computer Science background means that AI is not new to me. In fact, I am in an AI class right now, and I have taken other classes that explain AI from different angles. I have used AI many times; Grammarly, which I use almost daily, also relies on AI. I am no stranger to AI, and although I am aware that it is sometimes incorrect, I still use it when I am feeling confused and have no desire to go to office hours. During this module, I truly felt the wrath of AI's incorrectness. I have used AI to explain quizzes to me before (with permission from the professor), and the explanations were blatantly wrong but always sounded super right. So, I appreciate that this module pushed back on that false sense of correctness. Generative AI is so tempting, but being clearly told that AI is not always right will hopefully make me rely on it less.
I asked Gemini to create a SAMR analysis for ChatGPT, and I found the answer very interesting. I had read the prompt incorrectly and thought I was supposed to ask our generative AI of choice to create a SAMR analysis for itself. After asking the correct question, I realized Gemini had given me the same answer twice, just with the names of the models swapped. I found this interesting because plagiarism is a huge problem in academia, and if two separate students handed this in, they would probably be accused of plagiarizing each other.
I found that with generative AI in general, not just ChatGPT, the hardest thing to assess is validity. ChatGPT would just say things, and correcting it felt like giving it a slap on the wrist. I haven't used Gemini as fully as I have used ChatGPT, but I like that you can just "Google it": it has the ability to say, "hey, if you want to see more about this topic, here's a hyperlink." I think that feature would be really useful when generating ideas. I showed how that is displayed with some screenshots below. Gemini told me that ChatGPT is more budget friendly, handles a greater variety of tasks, and is better at human-like conversation. When discussing task variety, Gemini didn't disclose that ChatGPT is not always right. This lack of a disclaimer might lead people to think ChatGPT is more accurate than it is.
Playing a game with Gemini
I decided to play with Google's Gemini, and we played a Shark Tank-style game. The output Gemini gave me truly sounded like the show; it was as if Shark Tank were a book. I could hear the narrator, and I truly thought Gemini nailed the narrator's tone. For the most part, it was quite up to date with the time period. I don't really watch Shark Tank, but I know the main original investors, and I believe those are who the game used as investors. The tone of all of the investors was also spot on, so colour me impressed. Gemini was quite successful in creating a compelling game; it truly brought me into the world of Shark Tank. Funnily enough, I didn't really think about what I was going to present to the sharks, so what I pitched kind of sucked, but the responses, in my opinion, were realistic. I included a small snippet of the conversation from the game. I would like to place a disclaimer that The Little Potato Co is a real company that sells potatoes and is not affiliated with my fake Little Potato Co.
Ethical Concerns
Some ethical concerns I have involve how we can verify the facts behind generated content. I tested this by asking Gemini about the new Wicked movie coming out, and surprisingly, Gemini gave me options to dig further into some of the claims it was making. Granted, what I asked Gemini wasn't super serious and wasn't for a paper I needed to hand in. I wonder, and will probably test this out some time soon, whether it could cite material from scientific papers and not just Wikipedia.
Accuracy Evaluation
When I used generative AI before to explain certain questions on a Computer Science quiz, I noticed that the explanations were often wrong, although they sounded super convincing to me. Whenever I asked a friend to explain a question, she would often point out that the AI explanations were very wrong and would point to where in the lecture the topic was explained. However, I have gotten fairly correct and in-depth responses from generative AI when it came to explaining topics in psychology and biopsychology. I noticed that when I ask for clarification on a topic, generative AI is quite successful, but once I ask it to solve quiz questions, it is often wrong and the logic doesn't always follow. I also noticed that when I try to generate code and give it a new thing to tweak, it doesn't always do so. Granted, sometimes I am asking for the world in a piece of code, but I found generative AI to be stubborn at times.
Citations
“create an SAMR analysis of the use of ChatGPT for learning” prompt. Gemini, Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/178b7fd0dc0ee892.
“create an SAMR analysis of the use of GEMINI” prompt. Gemini, Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/178b7fd0dc0ee892.
“Hello everyone! we have these little growers that allow everyone to have fresh grown potatoes during the whole year! I am looking for 1 million dollars for 50% of my company” follow-up prompt. Gemini, Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/178b7fd0dc0ee892.
“Tell me about the new wicked movie coming out” prompt. Gemini, Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/178b7fd0dc0ee892.
“The name of my company is little potato co and the product is an at home potato grower ecosystem so that you have free potatoes during the whole year. I am looking for 1 million dollars for 50% of my company.” prompt. Gemini, Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/178b7fd0dc0ee892.
Link to my comments:
Natasha, loved your post.
When you brought up how Gemini created similar posts for two different technologies (with respect to a SAMR analysis), it really highlighted how repetitive and unoriginal AI is. It really doesn't create anything on its own, does it? It just mashes things together given a certain degree of context.
I have found the inaccuracies of AI tools to almost be a bonus when I am reviewing material. I have to be constantly on the lookout for false information, to the point where I interact with the material so much, and with such a heavy dose of skepticism, that it helps me learn it better in a way... does that make sense? Have you found that as well?
Hey, really enjoyed your post! I totally get what you mean about AI sounding super convincing but being wrong sometimes. It's such a good reminder to always double-check what it says, especially with ChatGPT. The way you're using Gemini is super cool and kind of ties into what we learned about the SAMR model. At the Augmentation level, I love how Gemini adds hyperlinks to help with research.
Also, that Shark Tank game sounds like a fun way to experiment with AI! It really shows how technology can change up tasks and make them more interactive, like at the Modification level. Do you think Gemini could help us do new things in learning that we haven't even thought of yet?
Hi Natasha!
I totally agree with what you said about checking the validity of AI being the hardest thing. I had a similar experience where I asked ChatGPT for information and it just completely lied to me, and I had to correct it, which is crazy!
I didn't know how wrong AI could be with computer science questions, but I guess it makes sense given what we learned this week: AI is really good at creative, human things and not at these high-tech questions, at least not always. Even with the Shark Tank game you played, it's super interesting how AI can imitate and handle scenarios full of cultural knowledge so well.
Do you think that if someone asked AI to write a script for a show like Shark Tank, you would be able to tell the real script from the AI one?
Nice blog post!