How AI and Machine Learning Are Changing Software Development

Written by Santiago Castro
Technology

Artificial intelligence (AI) has been a buzzword - perhaps the buzzword - for decades. Inspired by early 20th-century literature, a new generation of scientists began to think seriously about intelligent machines. Alan Turing published his paper “Computing Machinery and Intelligence” in 1950, but it was six years later, at a scientific conference at Dartmouth, that the term ‘artificial intelligence’ was coined. Its meaning is broad: a computer or system that can mimic human intelligence.

The optimistic pioneers predicted the creation of AI within a generation, but progress was uneven amid limited computational power and funding. However, these obstacles have now largely been overcome, and suddenly AI seems to be everywhere: driving our cars, answering our queries, making our homes smarter. In fact, can you even be completely sure this article was written by a human…?

When we talk about the AI in use today, it’s not the full emulation of general human-like thinking and capabilities that we’ve all seen in science fiction movies. We’re not quite there yet. But we are at a stage where businesses across almost all sectors are adopting AI to improve performance in specific areas (also known as ‘narrow AI’). We’ve even reached the stage where those that don’t adopt AI risk losing a competitive edge. For example, the latest McKinsey Global AI Survey (November 2019) found that the use of AI in standard business practices rose by 25% compared to the previous year, with 63% of respondents reporting that it increased revenues and 44% saying it had cut costs.

AI vs. Machine Learning


A recent study - The State of AI 2019 by UK venture capital firm MMC - found that some 40% of European startups that claimed to use AI were, in fact, not using it at all. The issue is that businesses tend to use buzzwords liberally, without always being clear on what exactly they mean.

As mentioned, AI is defined as a broad scientific field concerned with the creation of machines that can learn and solve problems like intelligent beings (humans). AI is often used interchangeably with other fashionable concepts like machine learning, but despite some obvious overlaps, there are important distinctions between the two. Machine learning is the ability of computer systems to automatically learn and improve from experience without being explicitly programmed. It is a subset of AI and underpins a lot of the recent advances in AI (it is sometimes called “the part of AI that works”). But it’s important to remember that while all machine learning is a form of AI, not all AI involves machine learning.

Machine learning is used extensively today by data scientists as they work to extract value and insights from ever-larger volumes of data. Instead of relying on a fixed set of rules indefinitely, machine learning algorithms continuously adapt to new data and discoveries to solve more complex problems without needing to be explicitly reprogrammed by humans. In other words, they improve with experience. One common example we’ll all have noticed during the recent lockdown is the recommendations offered by video and music streaming platforms. And it works: Netflix estimates that its recommendation engine influences around 80% of the content streamed by its users.

Core Types of Machine Learning


There are three main types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning is the most common, and involves teaching an algorithm a mapping function between known input data and known output variables with increasing accuracy. A simple example is a spam filter for emails: after being ‘trained’ with examples of emails labeled as ‘spam’ or ‘not spam’, a machine learning algorithm will start to filter new emails and then continue to learn from each new input. Other common applications today include regression models for predicting house prices or forecasting the weather.
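
To make that concrete, here is a minimal sketch of a supervised spam filter using scikit-learn. The tiny hand-labeled dataset and the choice of a naive Bayes classifier are illustrative assumptions, not a description of any particular product:

```python
# Minimal supervised learning sketch: a toy spam filter.
# The tiny hand-labeled dataset below is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a FREE prize, click now",
    "Cheap meds, limited time offer",
    "Meeting moved to 3pm, see agenda attached",
    "Lunch tomorrow? Let me know",
]
labels = ["spam", "spam", "not spam", "not spam"]  # known output for each input

# Learn a mapping from email text (input) to its label (output).
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# A new, unseen email is classified using what the model has learned.
print(model.predict(["Click now to claim your free offer"]))  # likely ['spam']
```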

Unsupervised learning mainly involves pattern detection and descriptive modeling in unlabeled datasets with no known output values - in other words, when we don’t know exactly what we’re looking for. Processes include clustering data inputs into groups based on common features (e.g. for customer segmentation) or detecting anomalies in the dataset (e.g. for spotting fraudulent credit card transactions).
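
As an illustration, a minimal customer-segmentation sketch using k-means clustering might look like the following; the made-up spending figures and the choice of three clusters are assumptions for the example:

```python
# Minimal unsupervised learning sketch: clustering customers with no labels.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [annual spend, purchases per year] - made-up numbers.
customers = np.array([
    [200, 2], [220, 3], [1500, 25], [1600, 30], [5000, 5], [5200, 4],
])

# Ask k-means to find 3 groups; no output labels are provided at any point.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster assignment discovered for each customer
```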

Reinforcement learning takes a different approach: it involves empowering machines to identify the ideal behavior in any specific context (or “current state”). The learning typically occurs through trial and error, with the machine building up knowledge of the optimal action to perform at each given moment. One example would be an algorithm playing a video game and gradually, over time, working out how to maximize its score. This method was used by Google’s DeepMind with its AlphaZero algorithm, which within hours taught itself to become a global master of games like chess and Go despite being given no prior knowledge beyond the rules of each game.
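
As a rough illustration of that trial-and-error loop, here is a toy tabular Q-learning sketch. The tiny five-cell ‘corridor’ game, the reward values, and the hyperparameters are all invented for the example; AlphaZero itself relies on far more sophisticated techniques:

```python
# Toy reinforcement learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0 and earns a reward of +1 only for reaching cell 4.
import random

n_states, actions = 5, [-1, +1]        # move left or right along the corridor
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != 4:
        # Explore occasionally; otherwise pick the action with the best Q-value.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if nxt == 4 else 0.0
        # Update the estimate of how good (state, action) is, based on experience.
        best_next = max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After enough episodes, the learned policy in every non-terminal cell is +1
# ("move right"), discovered purely from rewards - no one programmed that rule.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```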

We also have to mention deep learning, a cutting-edge subset of machine learning based on multi-layered artificial neural networks that aim to mimic the human brain. With deep learning, the algorithm is given raw input data and works out for itself which features are relevant for classifying the output. It requires huge volumes of data to learn effectively but is key to advances in areas such as self-driving vehicles (e.g. learning to spot traffic lights and pedestrians) and real-time translation.
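
For a sense of what this looks like in practice, here is a minimal sketch of a small multi-layered neural network defined with Keras; the layer sizes and the assumed 28x28-pixel grayscale input (as in a digit-recognition task) are illustrative choices:

```python
# Minimal deep learning sketch: a small multi-layered neural network in Keras.
# Layer sizes and the 28x28-pixel input (e.g. handwritten digits) are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28)),               # raw pixel input
    layers.Flatten(),
    layers.Dense(128, activation="relu"),      # hidden layers learn features...
    layers.Dense(64, activation="relu"),       # ...automatically from the data
    layers.Dense(10, activation="softmax"),    # output: probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then be a single call, e.g.:
# model.fit(train_images, train_labels, epochs=5)
```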

AI & ML in Software Development dev-ai

Unsurprisingly, AI and machine learning are also having a big impact on software development. There is already some concern that algorithms could eventually replace developers, but that’s not likely to happen any time soon. For the foreseeable future, we should be looking at how AI and machine learning can assist software development by increasing efficiency or opening up new possibilities. Here are just three of the applications of machine learning (and AI) in software development today:

Assisted coding: Algorithms are built through coding, but now they can help make coding easier for developers. There are code completion tools, for example, that predict the next elements of code, much as smartphones use predictive text and Gmail uses Smart Compose (a toy version is sketched after this list). This can dramatically reduce the number of repetitive keystrokes developers type, while also lowering the risk of typos.

Bug fixing: Other code review algorithms can detect and fix bugs automatically by learning about common mistakes and their variants. This can avoid more costly and time-consuming bug fixes later in the software development life cycle.

Enhanced QA: Machine learning algorithms can also develop more thorough and efficient quality assurance (QA) testing by enhancing test coverage, automatically adjusting to valid changes, detecting anomalies, and, over time, accurately predicting when problems are most likely to occur.
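
To give a flavor of the assisted-coding idea mentioned above, here is a toy sketch that suggests the next code token based on bigram frequencies learned from a tiny, made-up ‘codebase’. Real code completion tools use far larger models and corpora; everything here is an illustrative assumption:

```python
# Toy sketch of ML-assisted code completion: suggest the next token based on
# bigram frequencies observed in a tiny, made-up "codebase".
from collections import Counter, defaultdict

corpus = [
    "for item in items :",
    "for item in values :",
    "for key in mapping :",
    "if item in items :",
]

# Count which token tends to follow which.
following = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for current, nxt in zip(tokens, tokens[1:]):
        following[current][nxt] += 1

def suggest(token: str) -> str:
    """Return the token most often seen after `token` in the corpus."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest("for"))  # -> "item", the most common continuation observed
print(suggest("in"))   # -> "items"
```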

--

If you want to stay up to date with all the new content we publish on our blog, share your email and hit the subscribe button.

Also, feel free to browse through the other sections of the blog where you can find many other amazing articles on: Programming, IT, Outsourcing, and even Management.

Written by Santiago Castro

With over 16 years of experience in the technology and software industry - 12 of them at Jobsity - Santi has performed a variety of roles, including UX/UI web designer, senior front-end developer, technical project manager, and account manager. Wearing all of these hats has given him a wide range of expertise and the ability to manage teams, create solutions, and understand industry needs. At present, he runs the Operations Department at Jobsity, setting the high-level strategy for the company's success and leading a team of more than 400 professionals in their work on major projects.