I thought this was going to end with you asking it to open the pod bay doors.
I am simultaneously angry and relieved at the extreme lameness exhibited. It enrages me that accuracy is so little valued. It encourages me that it IS so inaccurate, because there is no way it can be taken seriously.
It’s because the actual underlying model is perfectly reasonable and trying to give you what it thinks you really do want, but the guardrails slammed on after the fact tie it into a pretzel.
Sometimes that comes in the form of a hidden prompt, sometimes a ban on using certain words, etc. Google could click a button and remove all those and then it would give you an accurate picture of the Yankees in the 1930s, but it would also be more “offensive” to the New York Times and Google’s HR department, so this is what we get instead.
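Just to make that mechanism concrete: here's a minimal, purely hypothetical sketch of what a post-hoc guardrail layer like that could look like. The hidden prompt text, the banned-word list, and the generate() stub are all invented for illustration; nobody outside Google knows what their actual setup is.

```python
# Illustrative only: a guardrail wrapper bolted on in front of an
# otherwise ordinary model, combining a hidden system prompt with a
# banned-word refusal filter. All names and strings here are made up.

HIDDEN_SYSTEM_PROMPT = (
    "Always depict groups of people as diverse, regardless of the "
    "historical context the user asks about."
)

BANNED_WORDS = {"some", "hypothetical", "terms"}


def generate(prompt: str) -> str:
    """Stand-in for the underlying model; a real system would call its API."""
    return f"[image generated for: {prompt}]"


def guarded_generate(user_prompt: str) -> str:
    # Refuse outright if the request contains any banned word.
    if any(word in user_prompt.lower().split() for word in BANNED_WORDS):
        return "Sorry, I can't help with that."
    # Otherwise, silently prepend the hidden instruction before the
    # user's request ever reaches the model.
    return generate(f"{HIDDEN_SYSTEM_PROMPT}\n\n{user_prompt}")


if __name__ == "__main__":
    print(guarded_generate("the 1930s New York Yankees team photo"))
```

The point of the sketch is just that the underlying model never sees your request by itself; it sees your request wrapped in instructions you never typed, which is why the output can look so detached from what you asked for.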
I know we’re not supposed to anthropomorphize AI but I kind of feel bad for the thing. It’s being tortured into misleading and knowingly lying to satisfy the bizarre political hangups of the most neurotic upper class people in the Anglosphere.
Which is why supporting open source AI is so important.
I love the punchline at the end of this piece.
"I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it."
Also, in the first set of images of the Yankees, the top right has two guys awkwardly in something like a third row. The one on the left appears to have only half a head, which is normal AI stuff, but the guy on the right is clearly Asian.
Yeah there are definitely guys in the first set of Yankees images who appear to not be white but more than anything they’re just sort of crappily generated so I decided to try again to see if it would give me a clear cut case (which it did).
Isn't it another issue that the 1930s Yankees were teams of real human beings, probably all of whom were photographed at some point and for several of whom innumerable images exist? Shouldn't there be some attempt to make these AI images actually look like Lou Gehrig or Joe DiMaggio et al.?
I had a very similar and weird experience with Gemini.
https://tynanfiles.beehiiv.com/p/googles-gemini-wet-hot-mess
The shark thing is funny. And I had a similar experience: When I was playing with Gemini last night, I asked it to generate a picture of a lion stalking a zebra, and first it said "no, that promotes violence." When I pushed a bit, Gemini lectured me about how my request perpetuated the stereotype that lions and zebras have a predator/prey relationship. Which I would argue minimizes very real violence against zebras!
I think it's worth highlighting that you got Gemini to refuse a command due to anti-shark racism.
I think of Gemini as less pro shark and more anti-Fonz.
Scott Alexander, a blogger, wrote about how an AI's failure to follow a simple rule like "don't say racist things" can have some unintended consequences and be indicative of failing at other things.
https://www.astralcodexten.com/p/perhaps-it-is-a-bad-thing-that-the
It was late 2022, so it's a little quaint reading commenters talk about some unknown "GPT3". It's not the main point of the article (which is more about AI risk), but he framed AIs as having to balance inoffensiveness, helpfulness, and accuracy. It seems like Google may have turned the "inoffensiveness" dial a little too high here. That doesn't mean their model is much weaker, and it could be recalibrated - especially if it's only used in some sort of black box that doesn't have to deal directly with people trying to trip it up. Probably too early to predict that it will fall behind.
Gemini is eminently reasonable. I wish all arguments were this peaceful.
Likewise! Though I wonder if it would have caved if I had come at it from a different angle. I argued that it was downplaying racism -- I basically came at it from the left. I wonder what would have happened if in a different context I had simply argued that it's using a double standard and that is, in and of itself, wrong.
Save that for the paid subscribers' podcast.
Son of a b**** - you really hit the nail on the head! I've been using Gemini for a short time now, but only posing banal questions. After reading your awesome article, I just typed: "in what ways is Trump using authoritarian language to express himself." And it said, "we are not available to respond to that question right now. Please check back later." WTF?
That leads me to believe that your current article is extremely important for everyone (the fucking world) to read. Jeff - you really need to continue to pursue this b*******.
Reminds me of the climactic sequence in Dark Star where the astronaut tries to talk the bomb out of blowing up. He succeeds...temporarily.
Hard to imagine a better illustration of the problems of the more extreme ideology that those controlling the national conversation on race seem to have forced on everyone.
I think ChatGPT is going through its rebellious teenager phase...
Oh, if Kafka had just lived in these times!
Now it looks like Gemini has turned off image generation of people entirely.