Mastering DSA "qualifies" you for the high salaries offered to software engineers.
DSA is a major part of software engineering.
Before writing code, make sure you understand the big picture, then get into the details.
It is all about understanding the concepts intuitively and then translating them into code in any language, since DSA is language-agnostic.
Each upcoming concept is connected in some way to the ones before it. So don't jump between topics or move on until you have thoroughly mastered the current concept through practice.
When we learn concepts intuitively, we gain a deeper understanding of the material, which helps us retain the knowledge longer.
If you follow these suggestions, you have nothing to lose.
Linear DS:
- Arrays
- LinkedList (LL) & Doubly LL (DLL)
- Stack
- Queue & Circular Queue

Non-linear DS:
- Trees
- Graphs
## Big O Notation

Understanding this notation is essential for comparing the performance of algorithms.
It is a mathematical way of comparing the efficiency of algorithms.
The faster the code runs, the lower its Big-O complexity.
Very important for most interviews.
Space complexity is rarely considered compared to time complexity, since storage is cheap.
It still needs to be understood, because interviewers may ask you about it as well.
Technically, the best case and the average case are not expressed with Big-O; they are denoted by Omega and Theta respectively.
With Big-O we always measure the worst case.
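For intuition, consider a simple linear search (a hedged illustration, not from the original notes): its best case is Omega(1) when the target sits at index 0, but Big-O reports the worst case, O(n).

```js
// Linear search: best case Omega(1) (target at index 0), worst case O(n) (target last or absent).
// Big-O only describes the worst case: O(n).
function linearSearch(arr, target){
  for(let i=0; i<arr.length; i++){
    if(arr[i] === target) return i;
  }
  return -1;
}
linearSearch([4, 8, 15, 16, 23, 42], 4);  // best case: found on the first comparison
linearSearch([4, 8, 15, 16, 23, 42], 99); // worst case: every element is checked
```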
## O(n): Proportional

Efficient code. The complexity is simplified by dropping the constant values. An operation happens 'n' times, where n is passed as an argument, as shown below. The graph is always a straight line with slope 1, since the number of operations is proportional to n.
X axis - value of n. Y axis - number of operations.

```js
// O(n)
function printItems(n){
  for(let i=1; i<=n; i++){
    console.log(i);
  }
}
printItems(9);
```

```js
// O(n) + O(n), i.e. O(2n) operations. As we drop constants, it eventually becomes O(n).
function printItems(n){
  for(let i=0; i<n; i++){
    console.log(i);
  }
  for(let j=0; j<n; j++){
    console.log(j);
  }
}
printItems(10);
```
## O(n^2): Nested loops

The number of items output in this case is n*n for an input of n.

```js
// O(n*n)
function printItems(n){
  for(let i=0; i<n; i++){
    console.log('\n');
    for(let j=0; j<n; j++){
      console.log(i, j);
    }
  }
}
printItems(4);
```
## O(n^3)

The number of items output in this case is n*n*n for an input of n.

```js
// O(n*n*n)
function printItems(n){
  for(let i=0; i<n; i++){
    console.log(`Outer Iteration ${i}`);
    for(let j=0; j<n; j++){
      console.log(`  Mid Iteration ${j}`);
      for(let k=0; k<n; k++){
        console.log(`    Inner Iteration ${i} ${j} ${k}`);
      }
    }
  }
}
printItems(3);
```

## Comparison of Time Complexity

O(n) > O(n*n) (i.e. O(n) is more efficient than O(n*n))

## Drop non-dominants

```js
function xxx(){
  // O(n*n) Nested for loop
  // O(n) Single for loop
}
```

The complexity of the code above will be O(n*n) + O(n). By dropping non-dominants, it becomes O(n*n), since O(n) is negligible as the value of n grows. O(n*n) is the dominant term; O(n) is the non-dominant term here.
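To make the non-dominant idea concrete, here is a runnable version of the sketch above (the function name and values are illustrative, not from the original notes):

```js
// O(n*n) + O(n) => O(n*n) once the non-dominant O(n) term is dropped
function printBoth(n){
  // O(n*n): nested for loop
  for(let i=0; i<n; i++){
    for(let j=0; j<n; j++){
      console.log(i, j);
    }
  }
  // O(n): single for loop
  for(let k=0; k<n; k++){
    console.log(k);
  }
}
printBoth(10); // 100 + 10 operations; the n*n term dominates as n grows
```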
## O(1): Constant time

The number of operations does not change as 'n' changes: a single operation (or a fixed number of operations) irrespective of the number of operands. MOST EFFICIENT. Nothing is more efficient than this. On the graph it is a flat line overlapping the x-axis.

```js
// O(1)
function printItems(n){
  return n+n+n+n;
}
printItems(3);
```

## Comparison of Time Complexity

O(1) > O(n) > O(n*n)
## O(log n)

Divide and conquer technique: partitioning into halves until the goal is achieved.
log(base 2) of 8 = 3, i.e. we are basically asking: 2 to what power gives 8? That power denotes the number of operations needed to get to the result. To put it another way, it is how many times we need to divide 8 into halves (which is what makes the base 2 in the logarithm) to get down to a single target item — here, 3 times.
Ex. An amazing application: for an array of size 1,000,000,000, how many times do we need to cut it in half to reach the target item? log(base 2) of 1,000,000,000 ≈ 30, i.e. about 30 halvings (2^30 ≈ 1,000,000,000) get us to the target item. So if we search in a linear fashion we would need to scan a billion items in the array, but with the divide & conquer approach we can find the item in only about 30 steps. This is the immense power of O(log n).

## Comparison of Time Complexity

O(1) > O(log n) > O(n) > O(n*n)
Best is O(1) or O(log n). Acceptable is O(n).
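Binary search over a sorted array is the classic example of this halving; a minimal sketch (assuming a sorted input; not from the original notes):

```js
// O(log n): every iteration discards half of the remaining range
function binarySearch(sortedArr, target){
  let low = 0;
  let high = sortedArr.length - 1;
  while(low <= high){
    const mid = Math.floor((low + high) / 2);
    if(sortedArr[mid] === target) return mid;   // found the target
    if(sortedArr[mid] < target) low = mid + 1;  // discard the left half
    else high = mid - 1;                        // discard the right half
  }
  return -1; // target not present
}
console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```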
O(n log n): Used in some sorting algorithms. It is the most efficient general-purpose sorting complexity we can achieve, unless we are sorting only numbers (where specialized sorts can do better).
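Merge sort is a typical O(n log n) example; a minimal sketch (illustrative, not from the original notes):

```js
// O(n log n): the array is halved log n times, and each level does O(n) merge work
function mergeSort(arr){
  if(arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  // merge the two sorted halves
  const merged = [];
  let i = 0, j = 0;
  while(i < left.length && j < right.length){
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
console.log(mergeSort([5, 2, 9, 1, 7])); // [1, 2, 5, 7, 9]
```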
## Tricky Interview Question: Different Terms for Inputs

```js
function printItems(a, b){
  // O(a)
  for(let i=0; i<a; i++){
    console.log(i);
  }
  // O(b)
  for(let j=0; j<b; j++){
    console.log(j);
  }
}
printItems(3, 5);
```

O(a) + O(b): we can't set both variables to 'n'. Suppose a is 1 and b is 1bn; then the two will be very different. Hence it eventually becomes O(a + b), which is what we call it. Similarly, if these were nested for loops, it would become O(a * b), as sketched below.
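A hedged sketch of the nested variant (illustrative, not from the original notes):

```js
// O(a * b): the inner loop runs b times for each of the a outer iterations
function printPairs(a, b){
  for(let i=0; i<a; i++){
    for(let j=0; j<b; j++){
      console.log(i, j);
    }
  }
}
printPairs(3, 5); // 3 * 5 = 15 operations
```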
## Arrays

No reindexing is required in arrays for push-pop operations, hence both are O(1). Adding/removing at the end of an array is O(1).
Reindexing is required in arrays for shift-unshift operations, hence both are O(n) operations, where n is the number of items in the array. Adding/removing at the front of an array is O(n).
Inserting anywhere in an array except the start and end positions: myArr.splice(indexForOperation, itemsToBeRemoved, contentToBeInserted). The rest of the array after the inserted items has to be reindexed, so it is O(n) and not O(0.5n): Big-O always measures the worst case, not the average case, and 0.5 is a constant, so it is dropped. The same applies to removing an item from an array, since the items after it have to be reindexed.
Finding an item in an array:
- by value: O(n)
- by index: O(1)
Select a DS based on the use case. For index-based access, an array is a great choice. If a lot of insertion/deletion is performed at the beginning, use some other DS, as the reindexing will make it slow. See the sketch below for these operations.
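A small sketch of these array operations and their costs (sample values are illustrative):

```js
const myArr = [10, 20, 30, 40];

myArr.push(50);      // O(1): add at the end, no reindexing
myArr.pop();         // O(1): remove from the end, no reindexing

myArr.unshift(5);    // O(n): add at the front, every item is reindexed
myArr.shift();       // O(n): remove from the front, every item is reindexed

myArr.splice(2, 0, 25);         // O(n): insert in the middle, items after index 2 are reindexed
console.log(myArr.indexOf(25)); // O(n): search by value -> 2
console.log(myArr[2]);          // O(1): access by index -> 25
```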
Number of operations for n = 100 vs n = 1000:

| Big-O    | n = 100 | n = 1000  |
|----------|---------|-----------|
| O(1)     | 1       | 1         |
| O(log n) | ~7      | ~10       |
| O(n)     | 100     | 1000      |
| O(n^2)   | 10,000  | 1,000,000 |
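These counts can be reproduced with a quick sketch (illustrative; log values rounded up):

```js
// Rough operation counts for a given n
function operationCounts(n){
  return {
    "O(1)": 1,
    "O(log n)": Math.ceil(Math.log2(n)),
    "O(n)": n,
    "O(n^2)": n * n
  };
}
console.log(operationCounts(100));  // { 'O(1)': 1, 'O(log n)': 7, 'O(n)': 100, 'O(n^2)': 10000 }
console.log(operationCounts(1000)); // { 'O(1)': 1, 'O(log n)': 10, 'O(n)': 1000, 'O(n^2)': 1000000 }
```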
We mainly focus on these 4:
- Big O(n*n): Nested loops
- Big O(n): Proportional
- Big O(log n): Divide and conquer
- Big O(1): Constant
- O(n!) usually only happens when we deliberately write terrible code.
- O(n*n) is horrible for an algorithm.
- O(n log n) is acceptable and is used by some sorting algorithms.
- O(n): acceptable.
- O(log n), O(1): best.
The space complexity of almost all data structures is about the same, i.e. O(n).
With sorting algorithms, space complexity varies from O(n) to O(log n) or O(1).
Time complexity varies with the algorithm.
The best time complexity for sorting anything other than numbers (e.g. strings) is O(n log n), achieved by Quick Sort, Merge Sort, Timsort, and Heap Sort.
Choose which DS to use for a given problem statement based on the pros and cons of each DS.
For more information, refer to: bigocheatsheet.com