The last few days have been incredible, with contributions pouring in. We had 13 first-time contributors in just 3 days, and the release before that had 14.
I’m currently working on making it so that fediverse links open in Jerboa rather than in the browser. After that, I think we can look into how to support that “add more links” setting in the UI.
We just released a big new update to Jerboa that adds a lot of much-needed features and polish. We had 14 new contributors too!
There isn’t exactly a roadmap at this point; it’s sort of a free-for-all, with lots of people implementing the features they want. Making issues on GitHub definitely helps visibility, and will help them get prioritized once the app is in a more complete state.
So far it’s been good! Lemmy has made me hopeful for better social media. I’m not hugely into twitter-style social media, so I was never really able to appreciate Mastodon.
I’m actually quite surprised by how much content is here already. There are regular posts and conversations, and a good mix of content. It’s not at reddit’s level in terms of volume, but I don’t feel starved or anything. I look forward to the future here!
Infinite scrolling is already implemented in Jerboa; it could definitely be brought to the web client.
There’s an open PR that’ll fix the font size issue. I’m using it now and it’s great. I’m also working on adding my must-have UI options from Boost.
I imagine it’ll be possible in the near future to improve the accuracy of technical AI content somewhat easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM capable of validating that response. The validator would have access to real references it can use to ensure some form of correctness; e.g., a Python response could be plugged into a Python interpreter to make sure it, to some extent, does what it’s purported to do. The validator then either decides the output is most likely correct, or generates some sort of feedback asking the first LLM to revise until it passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
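As a rough sketch of what that loop could look like (assuming a hypothetical `generate()` wrapper around the first LLM, and using the interpreter itself as the validator):

```python
import subprocess
import sys
import tempfile

def validate(code: str) -> tuple[bool, str]:
    """Run the candidate in a fresh interpreter; non-zero exit means failure."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=30)
    return result.returncode == 0, result.stderr

def generate_validated(prompt: str, generate, max_attempts: int = 3) -> str:
    """Generate, validate, and feed errors back for revision until it passes."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(prompt + feedback)  # first LLM produces a candidate
        ok, error = validate(candidate)
        if ok:
            return candidate
        # Carry the validator's evidence back into the next prompt.
        feedback = f"\n\nThe previous attempt failed with:\n{error}\nPlease revise."
    raise RuntimeError("no candidate passed validation")
```

Running the code only proves it executes without errors, not that it does the right thing; that’s where the second LLM checking against real references would come in.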
Phoenix, Arizona, USA. The weather is brutal this time of year, and the metro area is mostly yellow, brown, and beige. Not much green to speak of. Winter weather is great though, and a few hours north there are some pretty awesome mountains, so that’s somewhat redeeming.
Will do, thank you!
Sounds like a NAS to me!
You need an account with that instance to log in. Once logged in, though, you can post in any community, as long as your instance federates with the instance hosting that community.
Is the link to Recipe Filter broken? I’m interested in that but it seems to just be a link to a reddit user.
This is basically my reasoning exactly. I use Edge as a backup when a page doesn’t work in Firefox, but I use Firefox primarily because I don’t want the web to be defined by Blink’s implementation. Extensions on Android, while limited, are unbeatable.
I’m doing the same thing. I hadn’t written Kotlin or Jetpack Compose code before, but I was able to fix a minor bug that affected pre-login. Hopefully I’ll be able to find more ways to contribute.
This is super exciting. I’m so glad some states and Canada are decriminalizing psychedelics instead of furthering the harm of the war on drugs. Hopefully it’ll mirror cannabis legalization and will be available in most states a decade from now.
It doesn’t necessarily replace search engines, but I’ve been using ChatGPT, and sometimes Bing Chat, more and more. Like others have said, it does hallucinate all the time and can’t be trusted to be 100% correct. I don’t see that as a problem though, as long as I have some way to verify what it says when accuracy is important. The time wasted on bad answers is easily made up by the time saved on correct, or correct-ish, answers.
I’m a software engineer, so a common work pattern is to ask ChatGPT “write me code to do X, meeting constraints Y and Z”. As long as the subject isn’t too obscure, it’ll generally produce something I can work with. I then adapt that code sample to work in the actual context where it’s needed, and debug it as if it were my own code. Sometimes it’ll make up functions and things like that, but I’ll fix those, and it doesn’t take any more time than if I had to go learn that function while writing my own implementation.
Another scenario is when I hit an error I’m unfamiliar with. Oftentimes I can ask ChatGPT to explain the error, and sometimes even fix it for me. This usage more directly replaces a search engine. If the fix doesn’t work, then I’ll do it the old-fashioned way.
I’m really looking forward to GitHub Copilot X being even more integrated into this workflow than ChatGPT is.
Consider charging at home, if you can. If your typical driving keeps you within 100 miles of home and you can plug in there (a standard 120V outlet is typically sufficient), then you don’t need public charging stations. Just plug your car in at night and it’ll be full every morning.
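To put rough numbers on that (these are my own ballpark assumptions, not specs for any particular car):

```python
# Ballpark: range recovered overnight from a standard 120V outlet (Level 1).
volts = 120          # standard US household outlet
amps = 12            # typical continuous draw for Level 1 charging
hours = 10           # plugged in overnight
miles_per_kwh = 3.5  # rough EV efficiency; varies by model

kwh_added = volts * amps * hours / 1000    # ~14.4 kWh
miles_added = kwh_added * miles_per_kwh    # ~50 miles
print(f"~{miles_added:.0f} miles of range per night")
```

If your daily mileage is regularly closer to that 100-mile mark, a 240V (Level 2) home setup recovers range several times faster than a standard outlet.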
This is due to poor error handling in the API client code, triggered by the server returning some sort of error. There’s an open issue but it hasn’t been taken up yet.
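For illustration, here’s the kind of defensive handling that avoids that class of crash. This is a Python sketch with made-up names (Jerboa’s actual client is Kotlin), not the real code:

```python
import requests

def fetch_posts(url: str) -> list | None:
    """Fetch posts, treating server errors as data instead of crashing."""
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return None  # network failure: report "no data" rather than crash
    if not resp.ok:
        return None  # server returned an error status; surface it gracefully
    try:
        return resp.json()["posts"]
    except (ValueError, KeyError):
        return None  # malformed or unexpected payload
```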