Quadratic programming in plain English

The big picture: a quadratic programming problem can be reduced to a linear programming problem. Here is how. (1) KKT conditions. For any nonlinear program, max f(x) s.t. g(x) <= 0, it has been proved that an optimum must satisfy the Karush–Kuhn–Tucker (KKT) conditions, provided that some regularity conditions are satisfied. How is this proved? It is …
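The excerpt cuts off before stating the conditions themselves. For reference, here are the standard KKT conditions for the problem above, max f(x) s.t. g_i(x) <= 0 (textbook material, not recovered from the full post):

```latex
% KKT conditions for: max f(x)  s.t.  g_i(x) <= 0,  i = 1..m
% At a regular local optimum x*, there exist multipliers mu_i such that:
\begin{align}
  \nabla f(x^*) - \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) &= 0
      && \text{(stationarity)} \\
  g_i(x^*) &\le 0 && \text{(primal feasibility)} \\
  \mu_i &\ge 0 && \text{(dual feasibility)} \\
  \mu_i \, g_i(x^*) &= 0 && \text{(complementary slackness)}
\end{align}
```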


socat introduction

While searching for a command-line way to write something into a UNIX domain socket, I found this great tool called socat, which is similar to netcat/samplicator but has more features. The man page says: "Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them." Because the streams can …
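Since the post's starting point was writing into a UNIX domain socket, here is a minimal C++ sketch of the raw-sockets equivalent of a socat one-liner such as `echo hello | socat - UNIX-CONNECT:/tmp/demo.sock`. The socket path /tmp/demo.sock is hypothetical, and the sketch assumes something is already listening there:

```cpp
// Write a message into a UNIX domain socket without socat.
// Assumes a (hypothetical) server is already listening on /tmp/demo.sock.
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        std::perror("connect");
        return 1;
    }
    const char msg[] = "hello\n";
    write(fd, msg, sizeof(msg) - 1);  // push the bytes into the socket
    close(fd);
    return 0;
}
```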


Linear programming in plain English

Why study linear programming (LP)? LP has a lot of use cases; one of them is the SVM (support vector machine). The SVM's Lagrangian dual gives a lower bound on the SVM objective, and this Lagrangian dual can be solved by quadratic programming. The KKT conditions of this quadratic programming can be solved by …
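For concreteness, here is the standard form of an LP, and the SVM Lagrangian dual the excerpt refers to (both are standard textbook forms, not taken from the truncated post):

```latex
% Standard form of a linear program:
\begin{align}
  \max_{x} \quad & c^{\top} x \\
  \text{s.t.} \quad & A x \le b, \qquad x \ge 0
\end{align}
% The (hard-margin) SVM Lagrangian dual is a quadratic program in the
% multipliers \alpha_i, for training pairs (x_i, y_i) with y_i \in \{-1,+1\}:
\begin{align}
  \max_{\alpha} \quad & \sum_{i} \alpha_i
      - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^{\top} x_j \\
  \text{s.t.} \quad & \alpha_i \ge 0, \qquad \sum_{i} \alpha_i y_i = 0
\end{align}
```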


GDB cheatsheet

There is a good gdb cheatsheet at http://darkdust.net/files/GDB%20Cheat%20Sheet.pdf. The output formats for print and x are missing there; I think it is useful to add them: x, d, u, o, t, a, f, s, z, r. See https://sourceware.org/gdb/onlinedocs/gdb/Output-Formats.html. How do we usually run gdb? Normally we compile the source code with -g (please, no -O, otherwise some code is …
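A minimal sketch of that build-and-debug flow, with a few of the output formats from the post shown as gdb session comments (the program and variable names are made up for illustration):

```cpp
// toy.cpp -- a small program to practice the cheatsheet commands on.
// Build with debug info and no optimization, then start gdb:
//   g++ -g -O0 toy.cpp -o toy
//   gdb ./toy
// Inside gdb, the output formats apply to print and x, e.g.:
//   (gdb) break main
//   (gdb) run
//   (gdb) print/x answer      # print in hexadecimal
//   (gdb) print/t answer      # print in binary
//   (gdb) x/4xb &answer       # examine 4 bytes of memory in hex
#include <cstdio>

int main() {
    int answer = 42;          // inspect this variable from gdb
    std::printf("%d\n", answer);
    return 0;
}
```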


Life's most exquisite scenery turns out to be the calm and composure within

Some say that "Reflections at One Hundred" (《一百岁感言》) was not actually written by Yang Jiang herself. On the Chinese internet it is truly hard to tell the real from the fake; does even this call for Fang Zhouzi's fraud-busting? In any case, out of respect and remembrance, I excerpt the essay here in memory of Yang Jiang.

Yang Jiang: I am one hundred years old this year, and I have walked to the edge of life. I cannot know for certain how much farther I can go; one's lifespan is not one's own to command, but I know very well that I will soon "go home." I must wash off the grime these hundred years have stained me with before going home. I have no feeling of "climbing Mount Tai and finding the world small"; I simply live a quiet life in my own small world. Thinking it over, my heart is as calm as still water; I should greet each day with equanimity, ready to go home.

In this world awash in material desire, a human life is hard enough. If you set your mind on being an honest person who contends with no one, people will use you and bully you. If you have even a little talent, virtue, or looks, people will envy you and push you aside. If you yield magnanimously, people will encroach on you and do you harm. If you would not contend with others, you must ask nothing of the world, while still keeping up your strength, ready to struggle. If you want to live in peace with others, you must first maneuver among them, and be prepared to lose out at any time.

The young are greedy for play; youth is infatuated with love; the prime of life is spent chasing fame and career; old age settles into deceiving itself and others. How long is a human life, and how much stubborn iron can be refined into pure gold? Yet different degrees of tempering yield different degrees of achievement, and different degrees of indulgence and license pile up different degrees of baseness.

Heaven does not let every happiness gather on one person. Gaining love does not mean having money; having money does not mean gaining joy; gaining joy does not mean having health; having health does not mean everything will go as one wishes.

Keeping a contented and cheerful heart is the best way to temper the mind and purify the soul. All joyful enjoyment belongs to the spirit; this kind of joy turns endurance into enjoyment, a victory of spirit over matter. That is the philosophy of life.

Through different degrees of tempering, a person gains different degrees of cultivation and different degrees of benefit. It is like spice: the finer it is pounded and the more finely it is ground, the more intense its fragrance. We once longed so much for the waves and swells of fate, only to discover at the end that life's most exquisite scenery is the calm and composure within. We once hoped so much for the world's recognition, only to learn at the end that the world is one's own and has nothing to do with anyone else.

References:
http://news.wenweipo.com/2016/05/25/IN1605250061.htm
http://newspaper.jfdaily.com/xwcb/files/20130718/322242.PDF
http://zj.zjol.com.cn/news/350115.html



Bayes classifier in plain English

Bayes theorem: P(A | B) = P(B | A) P(A) / P(B), where A and B are events and P(B) ≠ 0. P(A) and P(B) are the probabilities of observing A and B without regard to each other. P(A | B), a conditional probability, is the probability of observing event A given that B is true. P(B | A) is the probability of observing event B given that A …
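A small worked example may help; the scenario and numbers below are purely illustrative, not from the post:

```latex
% Toy example: A = "email is spam", B = "email contains the word FREE".
% Suppose (illustrative numbers):
%   P(A) = 0.2,  P(B \mid A) = 0.6,  P(B \mid \neg A) = 0.05.
\begin{align}
  P(B) &= P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)
        = 0.6 \cdot 0.2 + 0.05 \cdot 0.8 = 0.16 \\
  P(A \mid B) &= \frac{P(B \mid A)\,P(A)}{P(B)}
        = \frac{0.6 \cdot 0.2}{0.16} = 0.75
\end{align}
% So seeing the word FREE raises the spam probability from 0.2 to 0.75.
```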


Decision tree learning in plain English

A decision tree works just like a computer language's if statement. In the AI/ML world, the problem is usually like this: given a training set with features [(f1, f2, …), …] and known categories/labels [c1, …], how can we learn from this training set/data and build a decision tree, so that for any new data we can predict which …
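To make the "just like if" point concrete, here is what a learned tree amounts to once written out as code; the features, thresholds, and labels are made up for illustration:

```cpp
// A learned decision tree is ultimately nested if/else statements.
// Hypothetical example: classify fruit from two features.
#include <iostream>
#include <string>

std::string classify(double weightGrams, double diameterCm) {
    if (weightGrams > 150.0) {            // root node splits on weight
        if (diameterCm > 7.0)
            return "grapefruit";
        return "orange";
    }
    if (diameterCm > 2.5)                 // second split on diameter
        return "plum";
    return "cherry";
}

int main() {
    // A new data point walks the tree from the root down to a leaf label.
    std::cout << classify(180.0, 8.0) << "\n";  // grapefruit
    std::cout << classify(20.0, 2.0)  << "\n";  // cherry
    return 0;
}
```

Learning the tree means choosing, from the training data, which feature and threshold to split on at each node; prediction is just running the resulting ifs.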


std::unique_lock vs std::lock_guard

The std::unique_lock class is a lot more flexible when dealing with mutex locks. It has the same interface as std::lock_guard but provides additional methods for explicitly locking and unlocking mutexes and for deferring locking on construction. As a general rule, std::lock_guard should be preferred when the additional features of std::unique_lock are not needed. Use cases: …
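A short sketch contrasting the two (this is the standard C++11 API; the counter is just a placeholder workload):

```cpp
#include <mutex>

std::mutex m;
int counter = 0;

void with_lock_guard() {
    std::lock_guard<std::mutex> lk(m);  // locks now, unlocks in the destructor;
    ++counter;                          // no other methods are available
}

void with_unique_lock() {
    std::unique_lock<std::mutex> lk(m, std::defer_lock);  // construct without locking
    // ... do work that does not need the mutex ...
    lk.lock();      // explicitly lock later
    ++counter;
    lk.unlock();    // explicitly unlock early, before destruction
    // lk can also be moved, or passed to std::condition_variable::wait
}
```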


k-Nearest Neighbors in plain English

Here is how it works in plain English. We have a training set (known, normalized features and their classifications): many data points [(feature1, feature2, feature3, …), (f1, f2, f3, …), …] and corresponding labels/classifications [category1, category2, …]. For any new data point t, calculate the distance between t and each point of the training set …
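A minimal sketch of those steps, assuming Euclidean distance and majority vote (the toy data is made up; the post's note about normalized features is assumed to already hold):

```cpp
// k-NN: distance to every training point, take the k nearest, majority vote.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Point { std::vector<double> features; std::string label; };

double dist(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

std::string knn(const std::vector<Point>& train,
                const std::vector<double>& t, size_t k) {
    // 1. distance from t to each training point
    std::vector<std::pair<double, std::string>> d;
    for (const auto& p : train) d.push_back({dist(p.features, t), p.label});
    // 2. take the k nearest
    std::sort(d.begin(), d.end());
    // 3. majority vote among their labels (needs C++14 generic lambdas)
    std::map<std::string, int> votes;
    for (size_t i = 0; i < k && i < d.size(); ++i) ++votes[d[i].second];
    return std::max_element(votes.begin(), votes.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; })->first;
}

int main() {
    std::vector<Point> train = {{{0.10, 0.20}, "A"}, {{0.15, 0.22}, "B"},
                                {{0.90, 0.80}, "B"}, {{0.85, 0.90}, "B"}};
    std::cout << knn(train, {0.12, 0.21}, 3) << "\n";  // "B": 2 of 3 nearest are B
}
```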