Agreed. If you need to calculate rectangles, ML is not the right tool. Now do the comparison for an image-identification program.
If anyone’s looking for the magic dividing line: ML is a very inefficient way to do anything, but it doesn’t require us to actually solve the problem, just to have a bunch of examples. For very hard but commonplace problems that is still revolutionary.
The correct tool for calculating the area of a rectangle is an elementary school kid who really wants that A.
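Or, failing that, one line of code. To make the contrast concrete, here's a rough sketch of my own (not anything from the posts above): the conventional solution is a single multiplication, and the ML version of the same thing is deliberately absurd, needing thousands of examples whose labels you had to compute with the one-liner anyway, just to get an approximate answer.

```python
# Conventional approach: the problem is already solved, one line of arithmetic.
def rectangle_area(width: float, height: float) -> float:
    return width * height

# ML-flavoured approach (deliberately absurd): learn the same mapping from examples.
# Hypothetical sketch using scikit-learn; the point is the overhead, not the accuracy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(10_000, 2))   # (width, height) training examples
y = X[:, 0] * X[:, 1]                      # labels we had to compute exactly anyway

model = RandomForestRegressor(n_estimators=50).fit(X, y)

print(rectangle_area(3, 4))                # 12, exact, instant
print(model.predict([[3, 4]])[0])          # approximately 12, after training on 10,000 examples
```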
Exactly. Explaining to a computer what a photo of a dog looks like is super hard; every rule you can come up with has exceptions or edge cases. But if you show it millions of dog pictures and millions of not-dog pictures, it can do a pretty decent job of figuring it out when given a new image it hasn’t seen before.
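For anyone curious what “show it millions of examples” looks like in practice, here’s a minimal sketch (my own illustration, assuming PyTorch/torchvision): fine-tune a pretrained image model on folders of dog and not-dog photos. The dataset path and folder layout are made-up placeholders.

```python
# Sketch of a dog / not-dog classifier: reuse a pretrained backbone and learn the
# two-class decision from labelled example images instead of hand-written rules.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: data/train/dog/*.jpg and data/train/not_dog/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: dog, not dog

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```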
Another problem is people using LLMs as if they were some form of general-purpose ML.
I think it can still be faster than the actual solution in some cases. I’ve seen someone train an ML model to animate a cloak in a way that looks realistic, using an existing physics simulation of it as training data, and it cut the processing time down to a fraction of the original.
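As far as I understand it, that kind of setup is a surrogate model: run the expensive solver offline to generate (current state, next state) pairs, then train a small network to imitate the step. Rough sketch below, assuming PyTorch; `run_full_simulation`, the mesh size, and the tiny architecture are placeholders I made up, not details from the project mentioned.

```python
# Surrogate sketch: replace the per-frame physics solve with a cheap learned step.
import torch
from torch import nn

N_VERTS = 512                                   # assumed cloak mesh size

def run_full_simulation(state: torch.Tensor) -> torch.Tensor:
    """Placeholder for the expensive ground-truth cloth solver."""
    return state + 0.01 * torch.randn_like(state)

# Collect (current state, next state) pairs from the real simulator.
states = [torch.randn(N_VERTS * 3)]
for _ in range(1000):
    states.append(run_full_simulation(states[-1]))
X = torch.stack(states[:-1])
Y = torch.stack(states[1:])

surrogate = nn.Sequential(
    nn.Linear(N_VERTS * 3, 256), nn.ReLU(),
    nn.Linear(256, N_VERTS * 3),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X), Y)
    loss.backward()
    opt.step()

# At runtime the cheap forward pass stands in for the full physics step each frame.
next_frame = surrogate(states[-1])
```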
I suppose that’s more because it’s not doing a full physics simulation; it’s just parroting the cloak-specific physics it observed. But still.
This. I’m sure to a sufficiently intelligent observer it would still look wrong. You could probably achieve the same thing with a conventional algorithm, it’s just that we haven’t come up with a way to profitably exploit our limited perception quite as well as the ML does.
In the same vein, one of the big things I’m waiting on is somebody making an NN pixel shader. Even a modest network can achieve a photorealistic look very easily.
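Just to illustrate the idea (my own sketch, not an existing shader): an NN pixel shader could be as small as a few-layer MLP that maps per-pixel inputs (normal, view direction, light direction) to an RGB colour, evaluated over every pixel in parallel. In practice you’d train it on offline renders and run it on the GPU; here it just runs on random placeholder inputs.

```python
# Tiny "pixel shader" network: per-pixel shading inputs in, RGB out.
import torch
from torch import nn

shader = nn.Sequential(
    nn.Linear(9, 32), nn.ReLU(),                # normal + view dir + light dir = 9 floats
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 3), nn.Sigmoid(),             # RGB in [0, 1]
)

H, W = 720, 1280
pixel_inputs = torch.randn(H * W, 9)            # placeholder G-buffer data, one row per pixel
with torch.no_grad():
    rgb = shader(pixel_inputs).reshape(H, W, 3) # shaded image
```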