I trialed GitHub Copilot and used ChatGPT for a bit, but recently I found myself using them less and less.
I’ve found them valuable when doing something new (at least to me), but for most of my day-to-day work they seem to have lost their luster.
I just don’t trust these tools to write code as efficiently as I can, knowing they are just backed by LLMs. If I have to spend my time vetting what they spit out to ensure correctness, efficiency, security, etc, then I might as well just do it myself from the beginning. I’m sure some find these tools useful and timesaving, but they’re not for me.
They’re very useful for the boilerplate stuff and it’s somewhat rewarding to type out 3-4 letters, hit tab and wind up with half a dozen lines in a bash script or config file.
They tend to get in the way more for complicated tasks, but I have learned to use them as a psychology trick: if I have writer’s block, I just let them pump out something wrong since it’s easier to critique a blob of text than a blank page.
> if I have writer’s block, I just let them pump out something wrong since it’s easier to critique a blob of text than a blank page.
Yeah, I mentioned this before while talking to a friend about it. Humans are much better at editing than at coming up with stuff from scratch, so seeing the suggestion is sometimes helpful even if it’s wrong.
I’ve been using GPT-4 through the phind.com website, because it allows one to include web links, which phind.com pulls information from and then includes in the context delivered to GPT-4. This has proved invaluable when trying to figure out new libraries: I just include a link to their documentation and start asking my specific integration/usage questions.
I’ve also been learning how to write my own Stable Diffusion implementation, and phind.com’s context-packing functionality has been extremely helpful in explaining and describing how components work, how they integrate, and the aspects of the papers this work is based on that I’m not confident I completely understand. It’s a tireless explainer that never gets bored and always responds with a chipper attitude.
Oh wow! That is really cool. I used Google Bard for a bit and liked it because it included some web links, but I found the answers not as good as ChatGPT’s (especially GPT-4). This looks like the best of both worlds.
I was skeptical at first, but after using phind.com it partially changed my opinion on using AI for development assistance.
It massively helps me to filter out information and leads me to the right answer.
Just the other day, I searched for how to write some LaTeX symbols and how to use the Java Stream API, and it spat out the result immediately, saving me precious time searching the Internet.
I don’t use CoPilot though.
Is it entirely free? Could not find an answer on their website.
Yes. It’s free and doesn’t have any limitations on usage for now (unless you use the “Expert” model).
But I guess you’re sending your query data to Phind, so it’s not totally “free”: you’re paying with your query data.
I use ChatGPT (with GPT-4) all the time for coding. I’ve developed a feel for the maximum complexity it can handle and I break down bigger problems into smaller subtasks and ask it to write code for them (usually one function at a time, after a detailed explanation of the context in the beginning). I need to review and test everything it produces thoroughly but it’s worth it. Sometimes it helps me complete tasks that would have otherwise taken a day to complete in 1-2 hours.
I also have Copilot installed but it isn’t as useful as ChatGPT. It’s nice to get a smart completion sometimes. I’m even in the Copilot Chat beta which uses GPT-4 and I find it inferior to ChatGPT with GPT-4.
I never touch GPT-3.5 anymore. It hallucinates too much and the quality of the output is very unpredictable. I guess most people who say AI is useless for coding haven’t tried GPT-4 yet.
Oh, and something else. In my experience, the quality of the output depends a LOT on the prompt. If you give a clear, detailed description of the problem, and try to express your intent instead of the specifics of the implementation, it usually performs very well. I’ve seen some people at work write the worst, sloppiest prompts and then complain how useless ChatGPT was.
This is really useful info, can you recommend a tutorial that you feel shows how to effectively use these tools along with traditional style coding? Or would you say it’s just a try and see approach/learn as you go. Personally, I think your comment best demonstrates where we are right now with AI assisted development.
Unfortunately the tutorials out there are mostly terrible. I’ve learnt it by experimenting a lot and seeing what worked for me. Some general advice:
- Subscribe to both Copilot and ChatGPT Plus and try using them as much as possible for at least a month. Some people prefer the former, others the latter, and you can’t know in advance which.
- Always use the GPT-4 model in ChatGPT but keep in mind that there is a 25 answers/3 hours rate limit. So try to squeeze as many questions and information into your messages as possible. GPT-4 is miles ahead of any other publicly available LLM, including GPT-3.5.
- Tips for ChatGPT:
- Give detailed, well-written prompts. Try to describe the problem the same way you would to a coworker.
- After describing the problem, ask ChatGPT if it needs any additional information to implement the code well. It usually asks very insightful questions.
- Answer the questions and then ask it to break down the problem into individual functions and then, in separate messages, ask it to implement them one by one.
- Remember that the context window is limited; after some time it won’t remember the beginning of the conversation, so it’s worth repeating parts of the specification later.
- Tips for Copilot:
- Write the method signature and have Copilot implement it for you
- Write a comment and have Copilot implement the corresponding code
- Paste code from one language as a comment, write “the same logic in $lang2” in another comment, and it will translate the code from $lang1 into $lang2.
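The translation trick from the last tip can be sketched in a single file. This is a hypothetical illustration (mine, not from the thread): the pasted JavaScript lives in comments, and the Python function below is the kind of completion Copilot tends to produce from the prompt comment.

```python
# Hypothetical illustration of the comment-translation trick.
# Step 1: paste the original code as a comment.
#
# function sumOfSquares(xs) {
#   return xs.reduce((acc, x) => acc + x * x, 0);
# }
#
# Step 2: write the prompt comment and let Copilot complete the function.
# the same logic in python
def sum_of_squares(xs):
    return sum(x * x for x in xs)

print(sum_of_squares([1, 2, 3]))  # 14
```

As always, the completion still needs a review pass; the comments just give the model enough context to make the translation obvious.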
I find ChatGPT more accessible and usable than GitHub Copilot. I did initially use Copilot when it was first released, but I found myself being interrupted by the suggestions. There were times when it was useful, but it got in the way more often than not. I’ll concede that Copilot is really good at suggesting dummy data.
With ChatGPT I tend to explore a problem and a solution - it’s more of a purposeful back-and-forth. I will often ask for example code, but even if I use it, it will most often be re-written. The key thing here is that I am stepping out of my editor to interact with ChatGPT, and that works really well for me - I’m in a different thinking state, and I find this a very productive way to work.
I still have copilot on but I find it not really useful beyond very simple things. It is a smarter autocomplete, so it’s nice. But you always need to have your brain turned on because it definitely invents things.
It’s also sometimes entertaining when it makes things up. I especially enjoy when it makes up entries in the changelog.
As for ChatGPT, I use it occasionally, mostly for tedious things I don’t want to spend time on. But I’ve definitely used it less lately. The hype has faded.
ChatGPT 4: helps me write better code, come up with content ideas, and brainstorm.
GitHub Copilot: speeds up my writing since it can take context and give suggestions. I also have the beta version with the chat, but it’s only about as good as GPT-3.5, so I don’t use it.
Stable Diffusion: I can’t draw, so I use a lot of AI images, but mostly as placeholders.
Stable Dreamfusion: tried it, didn’t get good results; I don’t have enough VRAM.
NeRF: tried it, haven’t used it for more than a few test runs, but I’ll use it more in the future when I make a 3D game.
I feel a similar way - I just use text generation AIs to find out new approaches or different ways to accomplish something.
My use case is primarily to list and briefly explain several pros/cons of an approach, which can provide a good starting point for further reading, or where to look in the docs.
I use GPT4 regularly. I find it really helps with brainstorming or thinking through a problem. The more I use it the more I learn about how it can help. Copilot is convenient sometimes but I wouldn’t be upset if I couldn’t use it anymore.
I am an avid user of Copilot. I don’t have statistics, but I’d say it writes about 10-50% of my code. It’s not providing great ideas about what the code should do, it mostly just automates away the obvious stuff.
It works especially great if your code is well documented and everything has sensible naming. Which is a state you want to be in anyway.
On the other hand, it helps you document the code and create useful error messages, since writing verbose text is much easier with it, and it can often generate useful text from just a few characters, given the surrounding context.
I also use it as an ad hoc documentation search when working with libraries I’m not very familiar with. What’s the argument called for turning the label text green? Just add a “green text” comment on the line and use the suggestion Copilot spits out. This works very well with popular Python libraries, which usually don’t have great type hints.
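As a concrete sketch of that workflow (my own example, not the commenter’s, and assuming matplotlib as the library): you type the comment, and the line after it is the kind of suggestion Copilot produces, which you can then verify against the docs.

```python
# Hypothetical example of the "ad hoc docs search" workflow with matplotlib.
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
# green text  <- the comment you type; the next line is what Copilot suggests
ax.set_xlabel("time (s)", color="green")
print(ax.xaxis.label.get_color())  # green
```

The suggestion is only a starting point: the keyword name still needs a quick sanity check, which is much faster than reading the docs from scratch.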
Another thing I find it useful for is math. Did you know that Copilot can even generate derivatives of math functions? It’s not 100% correct every time, but when I have to do some “quick maths” like coordinate transformations or interpolating keyframes in an animation, I get the correct formula for the variables I have about 90% of the time, autocompleted in about a second.
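The coordinate-transformation case can be sketched like this (a hypothetical example of mine, not the commenter’s code): a 2D rotation is exactly the sort of formula one would let Copilot autocomplete from the variable names, then sanity-check with a known input.

```python
# Hypothetical example of Copilot-assisted "quick maths": a 2D rotation.
import math

def rotate_point(x, y, angle_rad):
    """Rotate (x, y) around the origin by angle_rad, counter-clockwise."""
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    # The standard rotation formula, the part Copilot typically fills in:
    return (x * cos_a - y * sin_a, x * sin_a + y * cos_a)

# Sanity check: rotating (1, 0) by 90 degrees should land on (0, 1).
rx, ry = rotate_point(1.0, 0.0, math.pi / 2)
print(round(rx, 9), round(ry, 9))  # 0.0 1.0
```

The sanity check at the end is the important part: since the suggestion is right only most of the time, a quick known-answer test catches the other cases.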
All in all, Copilot saves me a bunch of time and keystrokes. If you write your code in an explicit, explanatory way, it just does what’s obvious, leaving you to think and elaborate instead of typing it all out.
As for ChatGPT, it is sometimes useful to figure out what API I might need in a specific situation, or explain error messages. I don’t use it as often, especially not to generate a bunch of code I want to use in my project. That’s Copilot’s job. But ChatGPT can explain a lot of things, and you can describe your problem much broader than with Copilot.
GPT-4 is also much better. But at twice the price of Copilot (in my country), it doesn’t bring as much bang for the buck, at least for my kind of usage.
I was thinking that one effect Copilot-like tools will have on projects is more comments describing the code: Copilot writes better code when the code is well documented, and it can also help write that documentation (descriptions and parameters) in the first place.
I used to ask things in the chat. But now whenever I think about trying it, I end up anticipating how I’ll have to deal with it making things up, and that puts me off.
I set up CoPilot with Neovim and am using it on a few projects. The domain and architectural style of these projects aren’t common, so asking it to generate entire functions doesn’t seem very useful.
But, when asked to either suggest the next line of code or complete a partially-typed line, it has been pretty useful if it is following a pattern established in the code already. Here are two examples:
Testing:
    test "record creation" do
      record = create_record("Pat", "This is a description of Pat")
      expect(record.name).to eq("Pat")                                  # I typed this
      expect(record.description).to eq("This is a description of Pat")  # CoPilot suggested this
    end
It’s pretty good at this as long as there’s a pattern to follow.
Repetitive stuff based on common things:
    COLORS = [
      "red",     # I typed this
      "orange",  # CoPilot suggested this
      "yellow",  # CoPilot suggests this after I accept the previous line
      # and so forth
    ]
So, asking it to come up with something out of thin air is usually worthless, but it has been helpful at automating repetitive tasks that are somewhat small and not really places where a library or abstraction would be better.
I use ChatGPT to write scripts for me all the time. For example, automating stuff in Google Docs/Sheets used to be a massive pain, but now it’s a breeze. GPT is quite good at google apps scripts.
I use ChatGPT with GPT-4 as a search engine when a Google search doesn’t immediately turn up the answer I’m looking for. ChatGPT won’t necessarily give me the right answer either (though sometimes it does), but reading its answers almost always causes me to think of a better search query to plug into Google. That doesn’t sound like much but it can save a lot of time compared to stumbling around trying to figure out the right keywords.
Occasionally I ask ChatGPT to write code samples, but (though they’re way better than GPT-3.5) they still hallucinate a bit too much, e.g., inventing library functions that don’t exist or, worse, inventing plausible-sounding but wrong facts about the problem domain. For example, I recently asked it to write some sample code to work with geographic data where the coordinate system could be customized, and it invented a heuristic about coordinate system identifiers that is true most of the time but has a ton of exceptions. If I didn’t already know better, I might have tried it out, seen that it appeared to work on a simple happy-path example, and accepted it without knowing it was going to break on a bunch of real-world data.
Every once in a while I give Copilot another shot, but so far, I’ve always ended up turning it off after realizing that I’m spending more time double-checking and fixing its code than it would have taken me to write the code by hand. To be fair, I’m usually working on backend code in a language that doesn’t have nearly as much training data as some other languages. Maybe if I were writing, say, Node.js code, it would do better.