tl;dr - CodeWhisperer is a decent tool for a beginner/intermediate coder like me to quickly write generic functions using common packages or APIs, and maybe to learn how to better comment and structure code, too.
I got preview access and took CodeWhisperer for a spin. Here’s what happened.
Yesterday I wrote this little function while working on a script that generates HTML reports from field data captured with Esri’s Survey123:
Some context: I wanted to generate a single HTML file for each report to simplify archiving. Reports include images, and I wanted to embed them. Exporting HTML to PDF would do this, but it brings formatting issues (which is why I switched the script from its original .docx output to HTML) and additional dependency requirements (potentially problematic when running on our Jenkins server). Base64 lets you embed the image binary data directly as a giant text string.
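The function itself isn't reproduced here, but the idea is easy to sketch. This is a minimal reconstruction of the approach, not the exact code (the function name and the hard-coded jpeg mimetype are my assumptions):

```python
import base64

def img_to_html(image_path):
    """Read an image file and return an HTML <img> tag with the
    binary data embedded inline as a base64 data URI."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    # the browser decodes the giant text string back into image bytes
    return f'<img src="data:image/jpeg;base64,{encoded}">'
```

The resulting tag can be dropped straight into the report's HTML, so the archived file carries its images with it.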
I deleted my function from the existing code, and then provided this slightly clearer comment as a prompt for CW:
I called for suggestions (ALT+C) and got:
Nailed it. Getting more specific:
Super. This actually improved on my code by using a method I wasn't aware of - decode() - to get rid of a couple of little tags that I had stumbled over yesterday and resolved via string indexing.
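If I'm reading my own stumble correctly, those little tags were the b'' bytes-literal wrapper: b64encode() returns bytes, and wrapping that in str() leaks the wrapper into the output. A quick illustration of the difference:

```python
import base64

raw = base64.b64encode(b"hello")  # returns bytes, not str
print(str(raw))             # b'aGVsbG8=' - the wrapper leaks in
print(raw.decode("utf-8"))  # aGVsbG8=   - clean string, no slicing needed
```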
And one more ask:
Pretty much identical to my code. All of these were the first suggestion CW provided, though you can easily flip through more suggestions with the arrow keys. Doing so reveals some interesting differences in commenting and style, which could be due to variation within my script or in the training data. How about f-strings every time, please?
Taking it a step beyond what I had initially written (only jpg in my project):
No dice. At first. But then I added (jpeg or png) to the comment, flipped through a few suggestions, and voila:
I guess I will have to forgive the format() method. Maybe if I had used f-strings consistently in my script, the suggestion would have done so as well?
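For the curious, here is the extension check in both styles - the format() call roughly as suggested, with the f-string version shown in a comment. The function name and the jpg-to-jpeg normalization are my own guesses at the shape of it:

```python
import os

def data_uri_prefix(image_path):
    """Build the data-URI prefix for a jpeg or png image
    based on the file extension."""
    ext = os.path.splitext(image_path)[1].lower().lstrip(".")
    if ext == "jpg":  # data URIs use image/jpeg, not image/jpg
        ext = "jpeg"
    return "data:image/{};base64,".format(ext)
    # f-string equivalent: return f"data:image/{ext};base64,"
```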
How about some arcpy?
The little bit of reading I did on CW explains that it is trained on a “variety of data sources including Amazon open source code.” Since geospatial and arcpy are a little more niche, I thought it would be interesting to see how CW performed here.
I started with this comment block in a new file:
Which led CW to give me these lines one at a time:
Not too shabby. Repeating the process, it was funny to see the different filler file paths that were suggested.
Let’s try and be more specific, modifying only the last comment:
Hmm, not so sure those last couple of arguments would do what we wanted. Maybe if this were inside a function?
Pretty nice. I like that it didn’t include all the optional arguments in CreateFeatureclass(). But why not use DATE as the field type for a field called date?
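Without the screenshot, the suggestion had roughly this shape (the paths, names, and field list are placeholders from memory, not CW's exact output). The arcpy import is deferred inside the function so the field definitions can be inspected without an ArcGIS install; note the TEXT type on the date field, the oddity mentioned above:

```python
def field_specs():
    """Fields to add to the new feature class. CW suggested TEXT
    for the date field; DATE would be the more natural type."""
    return [("name", "TEXT"), ("date", "TEXT")]

def create_points_fc(gdb_path, fc_name):
    """Create a point feature class and add the fields above.
    Requires an ArcGIS environment for arcpy."""
    import arcpy  # deferred so this module loads without ArcGIS
    fc = arcpy.management.CreateFeatureclass(gdb_path, fc_name, "POINT")
    for name, field_type in field_specs():
        arcpy.management.AddField(fc, name, field_type)
    return fc
```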
How about jumping into a larger arcpy script?
I tried adding this to the end of an existing geoprocessing script (~500 lines):
What if we try in a new file?
Hey, that works. My first thought was that there was no need to get the input spatial reference.
Flipping through a few suggestions, I found:
Aha - now I see why there was the inclination to check the input spatial reference first (duh!). Learning from AI… this is interesting.
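As best I can reconstruct it, the logic CW landed on was: describe the input, read its spatial reference, and only project when it differs from the target. A sketch of that shape, with the comparison pulled out so it can run without arcpy (the function names and the default EPSG code are my assumptions):

```python
def needs_projection(input_epsg, target_epsg):
    """Projection is only needed when the input's spatial
    reference differs from the target."""
    return input_epsg != target_epsg

def project_if_needed(in_fc, out_fc, target_epsg=4326):
    """Project a feature class to the target spatial reference,
    skipping the work when it is already there. Requires arcpy."""
    import arcpy  # deferred so the check above runs anywhere
    in_sr = arcpy.Describe(in_fc).spatialReference
    if not needs_projection(in_sr.factoryCode, target_epsg):
        return in_fc  # already in the target spatial reference
    return arcpy.management.Project(
        in_fc, out_fc, arcpy.SpatialReference(target_epsg))
```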
The very rigorous style used here is a little funny because of the bare return statements. But maybe the more advanced logic is part of the package. It seems like small variations in the prompt, even down to styling, can lead to more or less functional code suggestions.