==== Introduction to Neural Networks ====
  * [[http://
  * [[http://

Unfinished questions are the result of a shrinking time window and growing fatigue.. nothing great overall.

==== 9. Principal Component Analysis ====

We compute the correlation matrix and use it to obtain the principal vectors and principal components of the given data. This can be used for decorrelating the data, for dimensionality reduction, for reconstructing damaged data, or for compression. More, for example, here: http://
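The steps above can be sketched in a few lines of numpy. The toy 3-D data and the choice of keeping 2 components are illustrative assumptions; the notes mention the correlation matrix, and here the covariance matrix of the centred data plays the same role.

```python
import numpy as np

# Toy data: 200 samples in 3D with unequal variance per direction
# (the mixing matrix is an illustrative assumption)
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                             [0.5, 1.0, 0.0],
                                             [0.0, 0.0, 0.1]])

# Centre the data; the covariance matrix of the centred data stands in
# for the correlation matrix mentioned in the notes
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Eigenvectors of the covariance matrix are the principal vectors
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Dimensionality reduction: project onto the top 2 principal vectors
components = centered @ eigvecs[:, :2]   # the principal components

# Lossy reconstruction (the basis for compression / repairing data)
reconstructed = components @ eigvecs[:, :2].T + data.mean(axis=0)
```

Dropping only the smallest-variance direction keeps the reconstruction error small, which is exactly why PCA works for compression.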
blablabla we know..

=== Vector quantization ===

**Learning Vector Quantization**

For the data we pick M prototypes, since there are to be M classes to classify into. These prototypes are represented by neurons and by the weights leading to them from the input data. They are initialised, for example, by setting each neuron's weights to some vector from the input data. Then ordinary competitive learning is run: a random vector x is drawn from the input data set and we look for the prototype (neuron) with the smallest Euclidean distance to x. The winner is then updated as: \\
\\
w(new) = w(old) + eta( x - w(old) ), where eta is the learning rate (in supervised LVQ1 the sign is flipped when the winner's class does not match the class of x). \\
\\
Once the neurons are trained, a new input is classified into the class of its nearest prototype.

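A minimal sketch of the training loop above, assuming two toy Gaussian classes, M = 2 prototypes and a fixed learning rate eta = 0.05 (all illustrative choices); the LVQ1 sign flip for mislabelled winners is included.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy classes; M = 2 prototypes, one per class (illustrative assumption)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(100, 2))
data = np.vstack([class_a, class_b])
labels = np.array([0] * 100 + [1] * 100)

# Initialise each prototype with a vector taken from the input data
prototypes = np.vstack([class_a[0], class_b[0]])
proto_labels = np.array([0, 1])

eta = 0.05  # learning rate (the update-rule coefficient, not the number of prototypes)
for _ in range(10):                       # 10 epochs over shuffled data
    for i in rng.permutation(len(data)):
        x = data[i]
        # competitive step: winner = prototype with smallest Euclidean distance
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # LVQ1: move the winner toward x if the classes match, away otherwise
        sign = 1.0 if proto_labels[winner] == labels[i] else -1.0
        prototypes[winner] += sign * eta * (x - prototypes[winner])

def classify(x):
    """Assign x to the class of the nearest prototype."""
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]
```

After training, each prototype has drifted toward the centre of its class, so nearest-prototype classification separates the two clusters.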
=== Topographic feature mapping ===

Since in a SOM the topological neighbours of the winning neuron are updated along with it, similar clusters end up gathered next to each other.

=== Dimensionality reduction ===

blabla.. if we have a 2D SOM, the data get reduced into a 2D space.
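A minimal SOM sketch illustrating both points above: the winner's grid neighbours move with it, and a 2D grid maps higher-dimensional inputs to 2D coordinates. The 5x5 grid and the exponentially decaying learning rate and neighbourhood width are illustrative assumptions, not prescribed by the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.uniform(size=(500, 3))          # 3-D inputs in the unit cube

# A 5x5 SOM: fixed 2-D grid positions, learnable weights in input space
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
weights = rng.uniform(size=(25, 3))

for t in range(2000):
    x = data[rng.integers(len(data))]
    # competitive step: winner = unit whose weights are closest to x
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # neighbourhood function over GRID distance: topological neighbours of
    # the winner also move, which pulls similar clusters next to each other
    d_grid = np.linalg.norm(grid - grid[winner], axis=1)
    sigma = 2.0 * np.exp(-t / 1000)        # shrinking neighbourhood width
    eta = 0.5 * np.exp(-t / 1000)          # decaying learning rate
    h = np.exp(-(d_grid ** 2) / (2 * sigma ** 2))
    weights += eta * h[:, None] * (x - weights)

def project(x):
    """Dimensionality reduction: 3-D input -> winner's 2-D grid position."""
    return grid[np.argmin(np.linalg.norm(weights - x, axis=1))]
```

Every input lands on some grid cell, so `project` is the SOM's 3D-to-2D reduction.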
=== Magnification factor ===

Lots of similar data and little of anything else, for example when the Gaussian distribution of the inputs is skewed, causes the network to learn to recognise the more frequently presented data better, and in the resulting map those data occupy a larger area.

=== Sketch of the mathematical problems of analysing the algorithm ===
no idea..

==== 12. Hybrid NN models: Radial Basis Functions ====