Multi-Label Classification
1. Algorithms
Multi-label classification arises in many everyday settings: a playlist, for example, may carry both the tag "travel" and the tag "driving". Unlike multi-class classification, the labels in multi-label classification are not mutually exclusive. Multi-label classification algorithms fall roughly into two families:
One-vs-Rest
Basic idea: build one binary classifier for each label \(y_i\), taking instances that carry label \(y_i\) as positives and instances that do not as negatives; finally, combine the binary classifiers and apply a threshold (an instance receives the label when its score exceeds it). I implemented a Spark version, MultiLabelOneVsRest; part of the training code follows:
```scala
override def fit(dataset: Dataset[_]): OneVsRestModel = {
  val labelSchema = dataset.schema($(labelCol))
  // UDF checking whether a label index appears in an instance's label set
  val labelsContains: Int => UserDefinedFunction = (index: Int) => {
    udf { labels: Seq[Double] =>
      labels.toSet.contains(index.toDouble)
    }
  }
  val maxLabelUDF = udf { labels: Seq[Double] => labels.max }
  // Infer the number of classes from the largest label index in the data
  val numClasses = dataset
    .agg(max(maxLabelUDF(col($(labelCol))).cast(DoubleType)))
    .head()
    .getAs[Double](0)
    .toInt
}
```
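The one-vs-rest idea above can also be sketched without Spark. The following is a minimal illustration, not the MultiLabelOneVsRest implementation: one scikit-learn LogisticRegression per label column, with a probability threshold deciding which labels an instance receives (the toy data and the 0.5 threshold are my own assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 6 instances with 2 features; Y is a binary indicator matrix
# over 3 labels (a row may switch on several labels at once).
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
              [0.2, 0.9], [0.9, 0.1], [0.8, 0.9]])
Y = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0],
              [1, 0, 0], [0, 1, 0], [1, 1, 1]])

# One binary classifier per label: positives carry the label, the rest do not.
clfs = [LogisticRegression().fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict(x, threshold=0.5):
    # Score x with every per-label classifier and keep labels above threshold.
    probs = np.array([clf.predict_proba(x.reshape(1, -1))[0, 1]
                      for clf in clfs])
    return (probs >= threshold).astype(int)

print(predict(np.array([0.1, 1.0])))
```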
AdaBoost-MH
The AdaBoost-MH algorithm was proposed by Schapire (an author of AdaBoost) and Singer [1]; its basic idea mirrors AdaBoost: adaptively re-weight the distribution over sample–label pairs. Given training samples \(\langle (x_1, Y_1), \cdots, (x_m, Y_m) \rangle\), where each instance \(x_i \in \mathcal{X}\) has label set \(Y_i \subseteq \mathcal{Y}\), the algorithm proceeds as follows:
Here \(D_t(i, \ell)\) denotes the weight of instance \(x_i\) paired with label \(\ell\) at iteration t, and \(Y[\ell]\) indicates whether label \(\ell\) belongs to instance \((x, Y)\): +1 if it does and -1 otherwise (so that misclassified sample–label pairs gain weight); that is,
\[ Y[\ell] = \begin{cases} +1 & \ell \in Y \\ -1 & \ell \notin Y \end{cases} \]
\(Z_t\) is the normalization factor at each iteration, chosen so that all weights in the distribution matrix \(D\) sum to 1:
\[ Z_t = \sum_{i=1}^{m} \sum_{\ell \in \mathcal{Y}} D_{t}(i, \ell) \exp \left( -\alpha_{t} Y_i[\ell]\, h_t(x_i, \ell) \right) \]
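One round of the weight update \(D_{t+1}(i, \ell) = D_t(i, \ell) \exp(-\alpha_t Y_i[\ell] h_t(x_i, \ell)) / Z_t\) can be sketched in NumPy. The toy matrices and the choice \(\alpha_t = \frac{1}{2}\ln\frac{1+r_t}{1-r_t}\) follow the usual AdaBoost convention; the weak hypothesis H here is hand-picked for illustration, not learned.

```python
import numpy as np

def adaboost_mh_update(D, Y_sign, H, alpha):
    """One AdaBoost.MH weight update over (instance, label) pairs.

    D      : (m, L) current weight distribution, sums to 1
    Y_sign : (m, L) matrix of Y_i[l], +1 if label l belongs to x_i else -1
    H      : (m, L) weak-hypothesis outputs h_t(x_i, l) in {-1, +1}
    alpha  : weight alpha_t of the weak hypothesis
    """
    D_next = D * np.exp(-alpha * Y_sign * H)
    Z = D_next.sum()            # normalization factor Z_t
    return D_next / Z, Z

# Toy setup: 3 instances, 2 labels, uniform initial distribution.
m, L = 3, 2
D = np.full((m, L), 1.0 / (m * L))
Y_sign = np.array([[+1, -1], [-1, +1], [+1, +1]])
H = np.array([[+1, -1], [-1, -1], [+1, +1]])  # wrong only on pair (1, 1)

# Weighted correlation r and the standard alpha choice.
r = np.sum(D * Y_sign * H)
alpha = 0.5 * np.log((1 + r) / (1 - r))
D, Z = adaboost_mh_update(D, Y_sign, H, alpha)
# The misclassified pair (1, 1) now holds more weight than any correct pair.
```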
ML-KNN
ML-KNN (multi-label k-nearest neighbor) builds on the KNN algorithm: given the label information of the k nearest neighbors, it uses maximum a posteriori (MAP) estimation to decide whether instance \(t\) should receive label \(\ell\):
\[ y_t(\ell) = \mathop{ \arg \max}_{b \in \{0,1\}} P(H_b^{\ell} | E_{C_t(\ell)}^{\ell} ) \]
Here \(H_0^{\ell}\) denotes the event that instance \(t\) should not receive label \(\ell\), and \(H_1^{\ell}\) that it should; \(E_{C_t(\ell)}^{\ell}\) denotes the event that exactly \(C_t(\ell)\) of the k nearest neighbors of \(t\) carry label \(\ell\). By Bayes' theorem (the evidence term does not affect the argmax), this reduces to:
\[ y_t(\ell) = \mathop{ \arg \max}_{b \in \{0,1\}} P(H_b^{\ell}) P(E_{C_t(\ell)}^{\ell} | H_b^{\ell} ) \]
See [2] for how these two terms are computed.
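Given the counting statistics from [2], the MAP decision itself is a few lines. The sketch below assumes precomputed counts and uses Laplace smoothing with parameter s, as in the paper; the toy numbers are invented for illustration.

```python
import numpy as np

def mlknn_decision(C_t, n_pos, n, c, k, s=1.0):
    """MAP decision y_t(l) = argmax_b P(H_b) P(E_{C_t} | H_b).

    C_t   : how many of t's k nearest neighbors carry label l
    n_pos : training instances carrying label l (out of n total)
    c     : c[b][j] = training instances under hypothesis H_b whose own
            k-neighborhood contains exactly j instances with label l
    s     : Laplace smoothing parameter
    """
    prior = np.array([(s + n - n_pos) / (2 * s + n),    # P(H_0)
                      (s + n_pos) / (2 * s + n)])       # P(H_1)
    likelihood = np.array([
        (s + c[b][C_t]) / (s * (k + 1) + sum(c[b])) for b in (0, 1)
    ])
    posterior = prior * likelihood
    return int(np.argmax(posterior)), posterior

# Toy counts: k = 3 neighbors, 40 of 100 training instances carry the label.
k, n, n_pos = 3, 100, 40
c = [[30, 20, 8, 2],    # H_0: histogram of j = 0..k among negatives
     [5, 10, 20, 25]]   # H_1: histogram of j = 0..k among positives
b, post = mlknn_decision(2, n_pos, n, c, k)  # 2 of 3 neighbors carry the label
```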
2. Experiments
A Spark implementation of the AdaBoost.MH algorithm is available in sparkboost, and scikit-multilearn implements ML-KNN. I ran a comparison of these algorithms on the siam-competition2007 dataset, with the following results:
Algorithm | Hamming loss | Precision | Recall | F1 Measure |
---|---|---|---|---|
LR+OvR | 0.0569 | 0.6252 | 0.5586 | 0.5563 |
AdaBoost.MH | 0.0587 | 0.6280 | 0.6082 | 0.5837 |
ML-KNN | 0.0652 | 0.6204 | 0.6535 | 0.5977 |
In addition, Mulan provides many multi-label datasets, and Kaggle has hosted a multi-label classification competition, WISE 2014.
Part of the experiment code follows:
```python
import numpy as np
from sklearn import metrics
from sklearn.datasets import load_svmlight_file
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from skmultilearn.adapt import MLkNN

# load svm file
X_train, y_train = load_svmlight_file('tmc2007_train.svm',
                                      dtype=np.float64, multilabel=True)
X_test, y_test = load_svmlight_file('tmc2007_test.svm',
                                    dtype=np.float64, multilabel=True)

# convert multi labels to binary indicator matrix; fit on the training
# labels only, then reuse the same mapping for the test labels
mb = MultiLabelBinarizer()
y_train = mb.fit_transform(y_train)
y_test = mb.transform(y_test)

# LR + OvR
clf = OneVsRestClassifier(LogisticRegression(), n_jobs=10)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# multilabel classification metrics
loss = metrics.hamming_loss(y_test, y_pred)
prf = metrics.precision_recall_fscore_support(y_test, y_pred,
                                              average='samples')

# ML-KNN for multilabel classification
clf = MLkNN(k=15)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```
```scala
// AdaBoost.MH for multilabel classification (sparkboost)
val labels0Based = true
val binaryProblem = false

val learner = new AdaBoostMHLearner(sc)
learner.setNumIterations(params.numIterations) // 500 iterations
learner.setNumDocumentsPartitions(params.numDocumentsPartitions)
learner.setNumFeaturesPartitions(params.numFeaturesPartitions)
learner.setNumLabelsPartitions(params.numLabelsPartitions)
val classifier = learner.buildModel(params.input, labels0Based, binaryProblem)

// Load the test set in LibSVM format.
val testPath = "./tmc2007_test.svm"
val numRows = DataUtils.getNumRowsFromLibSvmFile(sc, testPath)
val testRdd = DataUtils.loadLibSvmFileFormatDataAsList(
  sc, testPath, labels0Based, binaryProblem, 0, numRows, -1)

val results = classifier.classifyWithResults(sc, testRdd, 20)

// predLabels / goldLabels: per-document predicted and gold label arrays
// extracted from `results`.
val predAndLabels = sc.parallelize(
  predLabels.zip(goldLabels).map { t =>
    (t._1.map(_.toDouble), t._2.map(_.toDouble))
  })
val metrics = new MultilabelMetrics(predAndLabels)
```
3. References
[1] Schapire, Robert E., and Yoram Singer. "BoosTexter: A boosting-based system for text categorization." Machine learning 39.2-3 (2000): 135-168.
[2] Zhang, Min-Ling, and Zhi-Hua Zhou. "ML-KNN: A lazy learning approach to multi-label learning." Pattern recognition 40.7 (2007): 2038-2048.
[3] Building a recommendation engine on PredictionIO, and exploring large-scale multi-label classification.