by H.H.

Originally, my plan for this article was to have it written primarily by artificial intelligence (“AI”), about AI, and reveal to you – the reader – only at the very end that it was in large part written by AI. I thought to myself, “How clever”: I would get to learn about and experiment with a new and trending type of technology, while saving myself an abundance of time actually writing the article. But, as you can tell, this article is not what I described in the first sentence. In the end, I could not locate some of the sources ChatGPT provided, only to learn that it had made them up. I’ll address this further below, but I was not going to publish something without legitimate sources, so I decided it would actually be most beneficial to still broach the same topic – the advantages and pitfalls of ChatGPT – but through the lens of my own experience.

This was my first time using ChatGPT or any AI model, but I had been informed by those who had used it before that there is a real art to prompting ChatGPT to get your desired results. In other words, just saying “write an article on ChatGPT” would likely not have been a helpful prompt, as it is too broad. After searching on Google for “best prompts for ChatGPT,” and adapting what I learned for the purpose of this article, I landed on the following prompt: “Write an article for the Dallas Association of Young Lawyers that is at least 600 words, informing readers how ChatGPT works, and what the advantages and pitfalls of using ChatGPT are for attorneys. Give examples of attorneys using ChatGPT and use creativity and humor. Cite all sources using Bluebook citations. Reveal at the end of the article that the article was generated by ChatGPT.”

ChatGPT did, in fact, very quickly produce an article on the advantages and pitfalls of ChatGPT for attorneys, which was informative, flowed well, and satisfied the minimum word count I set in my prompt. It also did what I asked in revealing to the reader, toward the bottom of the article, that it had written the article itself. Nonetheless, my experience was fraught with surprising complications, which I describe in further detail below.

The first issue I encountered was related to asking ChatGPT for “examples of attorneys using ChatGPT.” I had just heard a story about an attorney who was facing penalties for citing fake cases that ChatGPT had provided, and I thought that example would complement the “pitfalls” section well, so I assumed ChatGPT would reference this type of situation when I asked for examples. However, ChatGPT instead made up examples, such as telling me that an attorney named Sarah from a big law firm in Dallas found that ChatGPT saved her tons of time by generating drafts of briefs. Perhaps this was user error, and my prompt should have been more specific. However, while the example of “Sarah” was clearly made up, I was surprised and concerned that it was provided as if it were legitimate.

The second thing that caused me to revise my prompt was asking ChatGPT to use humor, although I cannot classify this one so much as an issue as a learning experience. I know, I know, it is a computer, which is not capable of a sense of humor, but I still thought I would try, as many of the suggested prompts I found on Google contained a request to use humor. Besides, I wanted the reader to begin the article thinking I had written it, so what could it hurt if you thought I was really funny for a few minutes? Well…I found ChatGPT’s jokes to be borderline offensive, so I asked for a different response. For your enjoyment, though: the jokes essentially compared the all-knowing, but repeating (rather than understanding) nature of ChatGPT to “your smart-aleck cousin” or “an attorney who just passed the bar” (ChatGPT, June 3, 2023).

After this first attempt, I clicked ChatGPT’s “Regenerate Response” option, which caused it to rewrite the article without changing the prompt, but the tone and jokes were still off (or at least not things I wanted the reader to think came from me before they got to the end of the article). So I rewrote the prompt, excluding any requests for humor or for examples. This time, I also asked for the Bluebook citations to be in footnotes.

Upon changing the prompt, I was provided an entirely new article with a different title and a different format. As for the citations, at first glance they appeared to be in an easy-to-read format that resembled citations, and they were, in fact, provided as footnotes; however, where I would have expected to see the underlines, italics, abbreviations, or short forms that are distinguishably Bluebook, I found none. Additionally, as mentioned above, when I began checking the sources provided by ChatGPT, I could not locate several of them. After spending too long searching for the author, then the title, then part of the title, and everything else I could think of, I finally inquired with ChatGPT itself about this. I learned that while ChatGPT is trained on a wealth of information, it has not at this time been trained to access documents; instead, it provided the citations (1) because I asked for them and (2) as placeholders to suggest where I would put citations if I were doing them in the form requested (e.g., in-text citations versus footnotes versus endnotes) (ChatGPT, June 11, 2023).

This experience taught me that while ChatGPT can efficiently create a template for an article, there are many facets of it one needs to be cautious about – in my experience, in particular, the presence of made-up information. However, on the occasions when made-up creations are exactly what you’re looking for, such as asking ChatGPT to write a song or a play about your dog just for fun (would you put it past me?), you may have found the right resource.


Articles on the DAYL website are provided for informational use only, and are in no way intended to constitute legal advice or the opinions or views of the DAYL.