##### Video reading area

Origin Ten Technology is driven by a desire to make the world a better place.

This area has been designed to improve video accessibility for users with hearing problems and for those who simply prefer to read.

# Intro to Algorithms: Crash Course Computer Science #13 By CrashCourse

Hi, I’m Carrie Anne. Welcome to Crash Course Computer Science. Over the past two episodes, we got our first taste of programming in a high-level language, like Python or Java. We talked about different types of programming language statements, like assignments, ifs, and loops, as well as putting statements into functions that perform a computation, like calculating an exponent. Importantly, the function we wrote to calculate exponents is only one possible solution. There are other ways to write this function, using different statements in different orders, that achieve exactly the same numerical result. The difference between them is the algorithm: that is, the specific steps used to complete the computation. Some algorithms are better than others, even if they produce equal results. Generally, the fewer steps it takes to compute, the better it is, though sometimes we care about other factors, like how much memory it uses.

The term "algorithm" comes from the Persian polymath Muhammad ibn Musa al-Khwarizmi, who was one of the fathers of algebra more than a millennium ago. The crafting of efficient algorithms, a problem that existed long before modern computers, led to a whole science surrounding computation, which evolved into the modern discipline of, you guessed it, computer science.

One of the most storied algorithmic problems in all of computer science is sorting, as in sorting names or sorting numbers. Computers sort all the time. Looking for the cheapest airfare, arranging your email by most recently sent, or scrolling your contacts by last name: those all require sorting. You might think sorting isn’t so tough; how many algorithms can there possibly be? The answer is: a lot. Computer scientists have spent decades inventing algorithms for sorting, with cool names like bubble sort and spaghetti sort. Let’s try sorting! Imagine we have a set of airfare prices to Indianapolis. We’ll talk about how data like this is represented in memory next week, but for now, a series of items like this is called an array. Let’s take a look at these numbers to help see how we might sort this programmatically.

We’ll start with a simple algorithm. First, let’s scan down the array to find the smallest number, starting at the top with 307. It’s the only number we’ve seen, so it’s also the smallest. The next is 239. That’s smaller than 307, so it becomes our new smallest number. Next is 214, our new smallest number. 250 is not, and neither is 384, 299, 223, or 312. So we’re finished scanning all the numbers, and 214 is the smallest. To put this into ascending order, we swap 214 with the number in the top location. Great, we sorted one number! Now we repeat the same procedure, but instead of starting at the top, we can start one spot below. First we see 239, which we save as our new smallest number. Scanning the rest of the array, we find 223 is the smallest, so we swap this with the number in the second spot. Now we repeat again, starting from the third number down. This time we swap 239 with 307. This process continues until we get to the very last number, and voilà, the array is sorted, and you’re ready to book that flight to Indianapolis.

The process we just walked through is one way, or one algorithm, for sorting an array. It’s called selection sort, and it’s pretty basic. Here’s the pseudocode. This function can be used to sort 8, 80, or 80 million numbers, and once you’ve written the function, you can use it over and over again. With this sort algorithm, we loop through each position in the array, from top to bottom, and then, for each of those positions, we have to loop through the array to find the smallest number to swap. You can see this in the code, where one FOR loop is nested inside another FOR loop. This means, very roughly, that if we want to sort n items, we have to loop n times, inside of which we loop n times, for a grand total of roughly n times n loops, or n squared. This relationship of input size to the number of steps the algorithm takes to run characterizes the complexity of the selection sort algorithm. It gives you an approximation of how fast, or slow, an algorithm is going to be. Computer scientists write this order of growth in something known as, no joke, "big O notation". N squared is not particularly efficient. Our example array had n = 8 items, and 8 squared is 64. If we increase the size of our array from 8 items to 80, the running time is now 80 squared, which is 6,400.
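The pseudocode shown on screen isn't captured in this transcript, but the procedure just described can be sketched in Python (a minimal rendering; the function and variable names are my own, and the array is our example airfares):

```python
def selection_sort(arr):
    """Sort a list in ascending order, in place, using selection sort."""
    n = len(arr)
    for i in range(n):                 # position to fill with the next-smallest value
        smallest = i
        for j in range(i + 1, n):      # scan the rest of the array for a smaller value
            if arr[j] < arr[smallest]:
                smallest = j
        arr[i], arr[smallest] = arr[smallest], arr[i]  # swap it into place
    return arr

fares = [307, 239, 214, 250, 384, 299, 223, 312]
print(selection_sort(fares))  # [214, 223, 239, 250, 299, 307, 312, 384]
```

The two nested loops are exactly where the N-squared behavior comes from: the outer loop runs n times, and for each pass the inner scan runs up to n times.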

So although our array only grew by 10 times, from 8 to 80, the running time increased by 100 times, from 64 to 6,400! This effect magnifies as the array gets larger. That’s a big problem for a company like Google, which has to sort arrays with millions or billions of entries. So you might ask, as a burgeoning computer scientist: is there a more efficient sorting algorithm? Let’s go back to our old unsorted array and try a different algorithm: merge sort. The first thing merge sort does is check if the size of the array is greater than 1. If it is, it splits the array into two halves. Since our array has size 8, it gets split into two arrays of size 4. These are still bigger than size 1, so they get split again, into arrays of size 2, and finally they split into eight arrays with one item in each.

Now we are ready to merge, which is how merge sort gets its name. Starting with the first two arrays, we read the first and only value in them, in this case 307 and 239. 239 is smaller, so we take that value first. The only number left is 307, so we put that value second. We’ve successfully merged two arrays. We now repeat this process for the remaining arrays, putting them in sorted order. Then the merge process repeats again. We take the first two arrays and compare the first numbers in them. This time it’s 239 and 214. 214 is lower, so we take that number first. Now we look again at the first numbers in both arrays: 239 and 250. 239 is lower, so we take that number next. Now we look at the next two numbers: 307 and 250. 250 is lower, so we take that. Finally, we’re left with just 307, so that gets added last. In every case, we start with two arrays, each individually sorted, and merge them into a larger sorted array. We repeat the exact same merging process for the two remaining arrays of size 2. Now we have two sorted arrays of size 4. Just as before, we merge, comparing the first numbers in each array and taking the lowest. We repeat this until all the numbers are merged, and then our array is fully sorted. The bad news is, no matter how many times we sort these, you’re still going to have to pay 214 dollars to get to Indianapolis.

Anyway, the big O computational complexity of merge sort is n times the log of n. The n comes from the number of times we need to compare and merge items, which is directly proportional to the number of items in the array. The log n comes from the number of merge steps. In our example, we broke our array of eight items into four, then two, and finally one. That’s three splits. Splitting in half repeatedly like this has a logarithmic relationship with the number of items; trust me, log base 2 of 8 equals 3 splits. If we double the size of our array to 16, that’s twice as many items to sort.
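The split-then-merge procedure walked through above can be sketched in Python (again a minimal illustration under the same assumptions as before, not the exact on-screen code):

```python
def merge_sort(arr):
    """Return a new, ascending-sorted copy of arr using merge sort."""
    if len(arr) <= 1:            # arrays of size 1 are already sorted
        return arr
    mid = len(arr) // 2          # split the array into two halves
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []                  # merge: repeatedly take the smaller front item
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])      # one side may still have leftover items
    merged.extend(right[j:])
    return merged

fares = [307, 239, 214, 250, 384, 299, 223, 312]
print(merge_sort(fares))  # [214, 223, 239, 250, 299, 307, 312, 384]
```

Each level of recursion halves the array (the log n splits), and each level does a linear amount of comparing and merging (the n), giving n log n overall.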

Yet it only increases the number of split steps by one, since log base 2 of 16 equals 4. Even if we increase the size of the array more than a thousand times, from 8 items to 8,000 items, the number of split steps stays pretty low: log base 2 of 8,000 is roughly 13. That’s more, but not much more, than three (about four times larger), and yet we’re sorting a lot more numbers. For this reason, merge sort is much more efficient than selection sort. And now I can put my ceramic cat collection in order much faster! There are literally dozens of

sorting algorithms we could review, but instead I want to move on to my other favorite category of classic algorithmic problems: graph search! A graph is a network of nodes connected by lines. You can think of it like a map, with cities and roads connecting them. Routes between these cities take different amounts of time, so we can label each line with what is called a cost, or weight. In this case, it’s weeks of travel. Now let’s say we want to find the fastest route for an army at Highgarden to reach the castle at Winterfell. The simplest approach would just be to try every single path exhaustively and calculate the cost of each. That’s a brute-force approach. We could have used a brute-force approach in sorting, by systematically trying every permutation of the array to check if it’s sorted. This would have an n-factorial complexity: that is, the number of items, times one less than that, times one less than that, and so on, down to one, which is way worse than even N squared. But we can be way more clever. The classic algorithmic solution to this graph problem was invented by one of the greatest minds in computer science, in both practice and theory, Edsger Dijkstra, so it’s appropriately named Dijkstra’s algorithm.
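For comparison, the brute-force sorting approach mentioned above, systematically trying every permutation of the array until one is in order, might look like this sketch (purely to illustrate n-factorial growth; never use it in practice, and the function name is hypothetical):

```python
from itertools import permutations

def brute_force_sort(arr):
    """Try every ordering of arr until one is ascending: O(n!) work."""
    for candidate in permutations(arr):
        # check whether this particular ordering happens to be sorted
        if all(candidate[k] <= candidate[k + 1] for k in range(len(candidate) - 1)):
            return list(candidate)

print(brute_force_sort([307, 239, 214, 250]))  # [214, 239, 250, 307]
```

With just 4 items there are 4! = 24 orderings to try; with 8 items there are already 40,320, which is why brute force loses so badly to even an N-squared algorithm.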

We start in Highgarden with a cost of zero, which we mark inside the node. For now, we’ll mark all other cities with question marks, since we don’t know the cost of getting to them yet. Dijkstra’s algorithm always starts with the node with the lowest cost. In this case, it only knows about one node, Highgarden, so it starts there. It follows all lines from that node to all connecting nodes that are one step away, and records the cost to get to each of them. That completes one round of the algorithm. We haven’t encountered Winterfell yet, so we loop and run Dijkstra’s algorithm again. With Highgarden checked, the next lowest cost node is King’s Landing. Just as before, we follow every unvisited line to any connecting cities. The line to the Trident has a cost of 5; however, we want to keep a running cost from Highgarden, so the total cost of getting to the Trident is 8 plus 5, which is 13 weeks. Now we follow the line to Riverrun, which has a high cost of 25, for a total of 33. But we can see inside of Riverrun that we’ve already found a path with a lower cost of just 10, so we disregard our new path and stick with the previous, better path.

We’ve now explored every line from King’s Landing and didn’t find Winterfell, so we move on. The next lowest cost node is Riverrun, at 10 weeks. First we check the path to the Trident, which has a total cost of 10 plus 2, or 12. That’s slightly better than the previous path we found, which had a cost of 13, so we update the path and cost to the Trident. There is also a line from Riverrun to Pike with a cost of 3. 10 plus 3 is 13, which beats the previous cost of 14, and so we update Pike’s path and cost as well. That’s all paths from Riverrun checked, so, you guessed it, Dijkstra’s algorithm loops again. The node with the next lowest cost is the Trident, and the only line from the Trident that we haven’t checked is a path to Winterfell. It has a cost of 10, plus we need to add in the cost of 12 it takes to get to the Trident, for a grand total cost of 22. We check our last path, from Pike to Winterfell, which sums to 31. Now we know the lowest total cost, and also the fastest route for the army to get there, which avoids King’s Landing! Dijkstra’s original algorithm, conceived in 1956, had a complexity of the number of nodes in the graph squared.

And N squared, as we’ve already discussed, is never great, because it means the algorithm can’t scale to big problems, like the entire road map of the United States. Fortunately, Dijkstra’s algorithm was improved a few years later to take the number of nodes in the graph, times the log of the number of nodes, plus the number of lines. Although this looks more complicated, it’s actually quite a bit faster. Plugging in our example graph, with six cities and nine lines, proves it: our algorithm drops from 36 loops to around 14. As with sorting, there are innumerable graph search algorithms, with different pros and cons.
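A minimal sketch of Dijkstra’s algorithm in Python, using a priority queue (this is the improved, heap-based variant; the edge weights below are reconstructed from the walkthrough above and may not match the on-screen map exactly):

```python
import heapq

def dijkstra(graph, start):
    """Return the lowest total cost from start to every reachable node."""
    costs = {start: 0}
    frontier = [(0, start)]                  # (cost so far, node) min-heap
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > costs.get(node, float("inf")):
            continue                         # stale entry; a cheaper path was already found
        for neighbor, weight in graph[node]:
            new_cost = cost + weight         # running cost from the start node
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost   # found a cheaper path, so keep it
                heapq.heappush(frontier, (new_cost, neighbor))
    return costs

# Edge weights (in weeks) inferred from the walkthrough in this episode.
graph = {
    "Highgarden":     [("King's Landing", 8), ("Riverrun", 10), ("Pike", 14)],
    "King's Landing": [("The Trident", 5), ("Riverrun", 25)],
    "Riverrun":       [("The Trident", 2), ("Pike", 3)],
    "The Trident":    [("Winterfell", 10)],
    "Pike":           [("Winterfell", 18)],
    "Winterfell":     [],
}
print(dijkstra(graph, "Highgarden")["Winterfell"])  # 22
```

Just as in the walkthrough, the Trident ends up at a cost of 12, Pike at 13, and Winterfell at 22 weeks, along the route that avoids King’s Landing.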

Every time you use a service like Google Maps to find directions, an algorithm much like Dijkstra’s is running on servers to figure out the best route for you. Algorithms are everywhere. The modern world would not be possible without them. We touched only the very tip of the algorithmic iceberg in this episode, but an essential part of being a computer scientist is leveraging existing algorithms and writing new ones when needed, and I hope this little taste has intrigued you to search further. I’ll see you next week. Crash Course Computer Science is produced in association with PBS Digital Studios.

At their channel, you can check out a playlist of shows like PBS Idea Channel, Physics Girl, and It’s Okay to Be Smart. This episode was filmed at the Chad and Stacey Emigholz Studio in Indianapolis, Indiana, and it was made with the help of all these nice people and our wonderful graphics team, Thought Café. Thanks for watching; I’ll see you later.