Machine learning, concluded: Did the “no-code” tools beat manual analysis?



(credit: Aurich Lawson | Getty Images)

I am not a data scientist. And while I know my way around a Jupyter notebook and have written a good amount of Python code, I do not profess to be anything close to a machine learning expert. So when I performed the first part of our no-code/low-code machine learning experiment and got better than a 90 percent accuracy rate on a model, I suspected I had done something wrong.

If you haven’t been following along thus far, here’s a quick review before I direct you back to the first two articles in this series. To see how far machine learning tools for the rest of us had advanced—and to redeem myself for the unwinnable machine learning task I was assigned last year—I took a well-worn heart attack data set from an archive at the University of California, Irvine and tried to outperform data science students’ results using the “easy button” of Amazon Web Services’ low-code and no-code tools.
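For a sense of what the "manual" side of that comparison involves, here is a minimal sketch of the kind of baseline a data science student might start with: a simple classifier trained on a binary heart-attack outcome. This is not the series' actual pipeline; the feature names and the data below are synthetic stand-ins shaped loosely like the UCI heart disease set.

```python
# Hypothetical baseline sketch -- NOT the article's pipeline.
# Synthetic stand-in data shaped like the UCI heart disease task:
# a handful of numeric features and a binary outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Stand-ins for features like age, cholesterol, max heart rate, etc.
X = rng.normal(size=(n, 5))
# Synthetic binary label with some signal in the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"baseline accuracy: {acc:.2f}")
```

A hand-built baseline like this is the benchmark the no-code tools have to beat: if a one-click AutoML model can't outperform a few lines of scikit-learn, the "easy button" hasn't earned its keep.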

The whole point of this experiment was to see:


In the finale of our experiment, we look at how the low/no-code tools performed.
