Dear Hub, Thank you very much for your kind reply. I posted the whole code, modified based on your original code. The problem seems to be solved (there were some problems in the compare functions before). However, the code is somewhat slow when the number of elements is huge (more than one million). Could you please take a look at it and give me some suggestions to improve the code? Thanks, TangLaoya

/// reference: https://kukuruku.co/hub/cpp/avl-trees
extern int avlcompare(void *item1, void *item2);
extern void externopt(void *t);
extern void externout(void *t);
#include "avl.h"
unsigned char height(node* p)
{
    return p ? p->height : 0;
}

int bfactor(node* p)
{
    return height(p->right) - height(p->left);
}

void fixheight(node* p)
{
    unsigned char hl = height(p->left);
    unsigned char hr = height(p->right);
    p->height = (hl > hr ? hl : hr) + 1;
}

node* rotateright(node* p)
{
    node* q = p->left;
    p->left = q->right;
    q->right = p;
    fixheight(p);
    fixheight(q);
    return q;
}

node* rotateleft(node* q)
{
    node* p = q->right;
    q->right = p->left;
    p->left = q;
    fixheight(q);
    fixheight(p);
    return p;
}

node* balance(node* p) // balancing the p node
{
    fixheight(p);
    if( bfactor(p) == 2 )
    {
        if( bfactor(p->right) < 0 )
            p->right = rotateright(p->right);
        return rotateleft(p);
    }
    if( bfactor(p) == -2 )
    {
        if( bfactor(p->left) > 0 )
            p->left = rotateleft(p->left);
        return rotateright(p);
    }
    return p; // balancing is not required
}

node* insert(node* p, void* k) // insert k key in a tree with p root
{
    if( !p ) return new node(k);
    //if( k < p->key )
    if( avlcompare(k, p->key) < 0 )
        p->left = insert(p->left, k);
    else
        p->right = insert(p->right, k);
    return balance(p);
}

node* findmin(node* p) // find a node with minimal key in a p tree
{
    return p->left ? findmin(p->left) : p;
}

node* removemin(node* p) // deleting a node with minimal key from a p tree
{
    if( p->left == 0 )
        return p->right;
    p->left = removemin(p->left);
    return balance(p);
}

node* findmax(node* p) // find a node with maximal key in a p tree
{
    return p->right ? findmax(p->right) : p;
}

node* remove(node* p, void* k); // forward declaration, used by popmax

void* popmax(node** root) // pop the maximal key: return its data and delete its node
{
    node* max = findmax(*root);
    void* key = max->key;
    // remove() returns the new root of the rebalanced tree; it must be
    // stored back, otherwise the caller keeps a stale (possibly freed) root.
    *root = remove(*root, key);
    return key;
}

node* remove(node* p, void* k) // deleting k key from p tree
{
    if( !p ) return 0;
    int comp = avlcompare(k, p->key);
    //if( k < p->key )
    if( comp < 0 )
        p->left = remove(p->left, k);
    //else if( k > p->key )
    else if( comp > 0 )
        p->right = remove(p->right, k);
    else // k == p->key
    {
        node* q = p->left;
        node* r = p->right;
        delete p;
        if( !r ) return q;
        node* min = findmin(r);
        min->right = removemin(r);
        min->left = q;
        return balance(min);
    }
    return balance(p);
}

void in_order_traversal(node* p)
{
    if( p )
    {
        in_order_traversal(p->left);
        externout(p->key);
        in_order_traversal(p->right);
    }
}

void pre_order_traversal(node* p)
{
    if( p )
    {
        externout(p->key);
        pre_order_traversal(p->left);
        pre_order_traversal(p->right);
    }
}

void deletetree(node* p)
{
    if( p )
    {
        deletetree(p->left);
        deletetree(p->right);
        externopt(p->key);
        delete p;
    }
}

Thank you very much for providing such powerful code. I have a question about using it.
After inserting all elements, I need to pop the maximal element (return the element's data and then delete its node) each time (while also inserting some new elements). However, my test shows that every time elements are deleted, the output of the whole tree (by in_order_traversal) does not change. Could you please take a look at it?
Thanks, Tang Laoya
node* findmax(node* p) // find a node with maximal key in a p tree
{
    return p->right ? findmax(p->right) : p;
}

void* popmax(node* p)
{
    node* max = findmax(p);
    void* key = max->key;
    remove(p, key);
    return key;
}

void in_order_traversal(node* p)
{
    if( p )
    {
        in_order_traversal(p->left);
        cout << p->key << '\n';
        in_order_traversal(p->right);
    }
}

I have done some work on image classification, and I came across a similar method for classification of flowers that might be of interest to you. It was quite successful on a very large number of flower species in various settings. I think you are on the right track with object classification, and improving certain bits could really improve the flexibility of the algorithm.

In that particular paper, circular patches represented with SIFT descriptors are used as 'codewords' and clustered to build a vocabulary/dictionary. Images are then represented using that vocabulary. The fact that the patches are circular rather than square makes them rotationally invariant. Furthermore, SIFT descriptors are one of the state-of-the-art methods of describing shape and are scale-invariant. This allows a more descriptive vocabulary to be built.

At this stage, you basically have a set of images described by their most prominent shape features in the form of feature vectors. You can then use any algorithm to classify those. A good choice is SVM, as it weighs the features based on how much they affect the decision boundary between classes. Thus, if you investigate your top support vectors, you could find not necessarily the common characteristics in a particular class, but rather the characteristics that differentiate that class from all the other classes. I can see that the DictionaryLearning algorithm also tries to optimize the features used, so that could be a good choice too.

This method could improve results in difficult cases where there is large intra-class variance, as you described. The complex descriptors would make the algorithm more robust and accurate in general.

Akka is really great! I use it with Scala and it's pretty cool. I haven't had a chance to write any articles on Kukuruku (btw, I like this project a lot) yet, but will see, maybe akka actors in Scala could be a good start :)

The \b is a word boundary modifier, which means it matches before and after an alphanumeric sequence. To be honest, this regular expression does not represent an IP address as per the RFC. For instance, the rule [0-9]{1,3} will also match 745, which is far above 255. But usually you do not care too much, as you can visually distinguish a real IP address from something like 745.983.001.874.

Interesting article, I learnt some stuff about grep! Thanks. Anyway, I think you could have explained some topics a bit more: when you say «That works, but this is better», why is it better? Cheers.

CSSComb.js is basically a new version of CSSComb, which has many more features. It can also «fix» your CSS, not only sort it. In addition, it was written in Node.js, which means engineers have a much better chance of using it in their projects (compared to the old, PHP-based CSSComb).

I use CSSComb as part of my workflow. I try to stick to the order as much as possible and allow csscomb to reformat when I miss it. I also have two tasks. One runs csscomb every time my Sass is compiled; this makes sure my CSS output is always clean. The other reorganizes the properties in my .scss files; this one I run manually whenever required, since it's annoying to work on a file and have it rearrange properties when you hit save. Personally, gulp worked faster and better for me than grunt, especially when running only on files that changed. Would love your opinion on that. Also, is this project replacing the old one, or is it just a better fork? Great work, thanks for the write-up :)

I tried Grunt, but I didn't really use it enough to make a fair comparison. Personally, Gulp feels much easier to use; Grunt is just a bunch of options, which can be intimidating and quite hard to understand at first. Gulp, on the other hand, is lightweight, and it's just JavaScript; it feels like I can get more done with fewer lines, even though that might not be true. It also feels more powerful, as you can write conditionals in plain JavaScript to run one part of the build process or another. That's just my opinion, though; remember I haven't used Grunt much, only for a few personal frontend projects.

Does that mean if hl is greater than hr, then use hl, otherwise use hr?