Structure of the system for determining computer malfunctions

Each time the computer is turned on, the POST diagnostic program written into the BIOS chip is started automatically. The POST system checks the operability of the most important components of the computer: the processor, RAM, the disk subsystem, the system logic (chipset), and all devices on which the normal functioning of the computer depends. Information about the results of the diagnostics can be reported in three ways:


Sound signals. Each fault corresponds to a series of audio signals that POST generates during device testing. This method is the main one, and the user should rely on it: the system most often reports errors with sound signals.

Text messages. POST uses this method in addition to the audio signals, provided the video system of the computer is in working condition. A message briefly describing the fault, together with an error code, appears on the screen. The code can be studied in more detail using the documentation for the motherboard or the BIOS. With the help of text messages, the computer reports only minor faults.

Hexadecimal codes sent to a specific port at a specific address. Regardless of whether audio or text messages are given, the system uses this method. However, to read the hexadecimal codes, you need special equipment – a POST card.

If the computer is working properly and POST testing has completed successfully, you will hear one short beep, after which the operating system will start to boot. If any malfunction is detected, the diagnostic program will give a special sound signal (a sequence of short and long beeps) characterizing the detected error, display a text message on the screen if the video system allows it, and the computer will stop working until the malfunction is eliminated.
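As an illustration, interpreting beep codes amounts to a simple lookup. The table below uses a subset of the widely documented AMI BIOS codes as an example; codes differ between BIOS vendors and versions, so the motherboard documentation remains authoritative:

```python
# Illustrative subset of AMI BIOS beep codes (actual codes vary by
# BIOS vendor and version -- always check the motherboard documentation).
AMI_BEEP_CODES = {
    1: "DRAM refresh failure",
    2: "Memory parity error",
    3: "Base 64K RAM failure",
    4: "System timer failure",
    5: "Processor failure",
    6: "Keyboard controller / Gate A20 failure",
    8: "Display memory read/write failure",
}

def diagnose(beeps: int) -> str:
    """Map a counted number of beeps to a probable fault description."""
    return AMI_BEEP_CODES.get(beeps, "Unknown fault -- consult BIOS documentation")

print(diagnose(3))  # Base 64K RAM failure
print(diagnose(9))  # Unknown fault -- consult BIOS documentation
```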

Comparison of existing analyzers and identifying conditions for improvement

The main programs that were found and analyzed during the exploratory work have some common functions and some differences. There were applications such as «??????? BIOS», «Pitidos del BIOS», «BIOS POST codes», «BiosBeepCodes», «Beep Sounds», «Postcode», «POST-???? BIOS» and others. During the analysis of the application «??????? BIOS», it was determined that this program contains:

– short information on how to find out which kind of BIOS the computer has

– 3 types of BIOS beep-signal models

– interface language – English


The program «Pitidos del BIOS» contains:

– 6 types of BIOS signal models

– common text error messages

– standard sound error signals

– interface language – Spanish

– advertisement

Discover the meaning of the beeps emitted by the BIOS of your computer and the meaning of the most common on-screen errors that prevent the computer from starting normally.

The POST card is an expansion board that has its own digital indicator and outputs the initialization codes of the motherboard to it. From the last code output, you can determine which component has a malfunction. These codes depend on the BIOS manufacturer of the motherboard. In the absence of errors and normal passing of the test, POST shows on the indicator a value that does not change during the operation of the computer and that depends on the BIOS version; for example, on most cards, at the end of initialization, the code FF is displayed.

The program «BIOS POST codes» contains:

– POST codes

– BIOS error sound signals

– 4 types of BIOS models

– Error descriptions

– a filter for searching sound signals and universal passwords

– description of LED post card indicators

– universal passwords for restoring the BIOS

– interface language – English

The app compiles the beep error codes of the most common computer BIOS manufacturers so you can fix some hardware errors. Because there are many BIOS brands, there are no signal codes standard to every BIOS. This app is useful when you need a quick reference guide. It contains:

– 19 BIOS signal models

– a search system using a dropdown list of BIOS manufacturers

– error messages

– error descriptions

– interface language – English

Comparison of existing sound recognition systems


Instantly identify music playing around you – for free. See album art for your favorite artists the way they intended it, in all its glory. Add a note to yourself so you can remember WHY and WHERE you identified that amazing song. Like that tune you heard in the local coffee shop? Wanna know more about that hot new artist? Tap and see more of their music and view similar artists.


• Quickly identify music playing around you

• Add a note to yourself to remember WHY and WHERE you made your ID

• See similar songs to your favorite artists

• View compelling artist pages

• See movie and TV information about the artist

• See biographical data about the artist


Shazam uses a smartphone or computer’s built-in microphone to gather a
brief sample of audio being played. It creates an acoustic fingerprint based on
the sample and compares it against a central database for a match. If it finds
a match, it sends information such as the artist, song title, and album back to
the user. Some implementations of Shazam incorporate relevant links to services
such as iTunes, Spotify, YouTube, or Groove Music.

Shazam works by analysing the captured sound and seeking a match based on
an acoustic fingerprint in a database of more than 11 million songs.

Shazam identifies songs using an audio fingerprint derived from a time-frequency graph called a spectrogram.
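A spectrogram can be computed as the magnitude of short-time Fourier transforms over overlapping windows. A minimal sketch follows; the window and hop sizes are illustrative choices, not Shazam's parameters:

```python
import numpy as np

# Minimal spectrogram sketch: magnitudes of short-time Fourier transforms
# over overlapping Hann-windowed frames. win and hop are illustrative.
def spectrogram(signal, win=256, hop=128):
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, win//2 + 1)

fs = 8000                              # sample rate in Hz
t = np.arange(fs) / fs                 # one second of samples
spec = spectrogram(np.sin(2 * np.pi * 440 * t))  # a pure 440 Hz tone
print(spec.shape)                      # (61, 129)
print(int(np.argmax(spec[0])))         # frequency bin nearest 440 Hz
```

For a pure tone, every frame's energy concentrates in the bin nearest 440 Hz (bin ≈ 440 / (8000/256) ≈ 14), which is exactly the kind of spectrogram peak a fingerprinting scheme would select.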

Shazam stores a catalogue of audio fingerprints in a database. The user
tags a song for 10 seconds and the application creates an audio fingerprint.

Once it creates the fingerprint of the audio, Shazam starts the search for
matches in the database. If there is a match, it returns the information to the
user; otherwise it returns a “song not known” dialogue.

Shazam can identify prerecorded music being broadcast from any source, such
as a radio, television, cinema or music in a club, provided that the background
noise level is not high enough to prevent an acoustic fingerprint being taken,
and that the song is present in the software’s database.
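The tag-fingerprint-match pipeline described above can be sketched as follows. The peak representation, the fan-out value, and the hash scheme are simplified illustrative assumptions, not Shazam's actual algorithm:

```python
import hashlib
from collections import defaultdict

# Landmark-style fingerprinting sketch: "peaks" are (time, frequency) pairs
# that would normally come from a spectrogram; pairs of nearby peaks are
# hashed so the hash is invariant to where in the song the sample starts.

def fingerprint(peaks, fan_out=3):
    """Hash pairs of nearby peaks into (hash, anchor_time) tuples."""
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            key = f"{f1}|{f2}|{t2 - t1}".encode()  # freqs + time delta
            hashes.append((hashlib.sha1(key).hexdigest()[:10], t1))
    return hashes

def build_index(catalogue):
    """catalogue: {song_name: peak list} -> inverted index hash -> songs."""
    index = defaultdict(set)
    for song, peaks in catalogue.items():
        for h, _ in fingerprint(peaks):
            index[h].add(song)
    return index

def identify(sample_peaks, index):
    """Return the song with the most matching hashes, or None."""
    votes = defaultdict(int)
    for h, _ in fingerprint(sample_peaks):
        for song in index.get(h, ()):
            votes[song] += 1
    return max(votes, key=votes.get) if votes else None

catalogue = {"song_a": [(0, 440), (1, 660), (2, 880), (3, 440)],
             "song_b": [(0, 220), (1, 330), (2, 550), (3, 220)]}
index = build_index(catalogue)
print(identify([(10, 440), (11, 660), (12, 880)], index))  # song_a
```

Because hashes encode only frequency pairs and their time offset, a 10-second sample taken from the middle of a song still votes for the right catalogue entry.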


SoundHound is the free music
discovery app that can listen and identify what’s playing. The music player
gives you full-length songs and videos with real-time song lyrics.

SoundHound includes the fastest music recognition engine, able to name tunes from speakers in as little as four seconds. In addition, it features geotagging options, Facebook and Twitter sharing, previews, purchase links, full-length YouTube videos, and favorite artists’ top songs and info.

Unlike Shazam, it will hazard a guess at just about any tune thrown at it, whether it is human- or speaker-produced, and it will succeed most of the time.

There is banner advertising, and there is a song-ID limit: you get five IDs per month, after which you either have to purchase five more IDs for $1 or upgrade to SoundHound proper for $5, which eliminates ads altogether.

Sound Search for Google Play

Sound Search for Google Play is an official application from Google that recognizes music playing nearby, much like better-known services such as Soundhound or Shazam.

Sound Search for Google Play is simple to use. It is placed as a widget on the home screen of an Android device, and a single tap is enough to recognize the track that is playing and show which song is currently sounding. Based on the result, Sound Search for Google Play offers information about the track and the artist, and links the recognized track to the Google Store, where you can buy it or add it to your library.

Sound Search for Google Play is a very simple application. It does not provide the additional functionality offered by services such as Shazam or Soundhound, which, together with its limited capabilities, is its main shortcoming compared with those competitors.

Classification and methods of sound recognition systems

Sound or speech recognition
systems are classified:

•         by the size of the dictionary (a limited set of
words/sounds, a large dictionary);

•         depending on the speaker (speaker-dependent and
speaker-independent systems);

•         by type of speech/sound (continuous or isolated speech/sounds);

•         by purpose (dictation systems, command systems);

•         by the algorithm used (neural networks, hidden Markov
models, dynamic programming);


•         by type of structural unit (phrases, words, sounds,
phonemes, diphones, allophones);

•         by the principle of selection of structural units (pattern
recognition, selection of lexical elements).

Speech recognition methods based on comparison with a reference are classified as follows:

•         Dynamic Time Warping

•         Bayesian discrimination

•         Hidden Markov Model

•         Neural networks

In time series analysis,
dynamic time warping (DTW) is one of the algorithms for measuring similarity
between two temporal sequences, which may vary in speed. For instance,
similarities in walking could be detected using DTW, even if one person was
walking faster than the other, or if there were accelerations and decelerations
during the course of an observation. DTW has been applied to temporal sequences
of video, audio, and graphics data — indeed, any data that can be turned into a
linear sequence can be analyzed with DTW. A well-known application has been
automatic speech recognition, to cope with different speaking speeds. Other
applications include speaker recognition and online signature recognition. It
can also be used in partial shape-matching applications.

In general, DTW is a method
that calculates an optimal match between two given sequences (e.g. time series)
with certain restrictions. The sequences are “warped” non-linearly in
the time dimension to determine a measure of their similarity independent of
certain non-linear variations in the time dimension. This sequence alignment
method is often used in time series classification. Although DTW measures a
distance-like quantity between two given sequences, it does not guarantee that
the triangle inequality holds.

In addition to a similarity measure between the two sequences, a so-called “warping path” is produced; by warping according to this path, the two signals may be aligned in time. A signal with an original set of points X(original), Y(original) is transformed to X(warped), Y(original). This finds applications in genetic sequence alignment and audio synchronisation.

As an example, consider two time series Q and C, of length n and m respectively, where:

Q = q1,q2,…,qi,…,qn (1)

C = c1,c2,…,cj,…,cm (2)

To align two sequences using DTW, we construct an n-by-m matrix where the (i-th, j-th) element of the matrix contains the distance d(qi, cj) between the two points qi and cj (with Euclidean distance, d(qi, cj) = (qi − cj)²). Each matrix element (i, j) corresponds to the alignment between the points qi and cj. This is illustrated in Figure 4. A warping path W is a contiguous (in the sense stated below) set of matrix elements that defines a mapping between Q and C. The k-th element of W is defined as wk = (i, j)k, so it will be:

W = w1, w2, …, wk, …, wK,   where max(m, n) ≤ K < m + n − 1 (3)

The warping path is typically subject to several constraints.

•         Boundary conditions: w1 = (1, 1) and wK = (m, n). Simply stated, this requires the warping path to start and finish in diagonally opposite corner cells of the matrix.

•         Continuity: given wk = (a, b), then wk−1 = (a′, b′) where a − a′ ≤ 1 and b − b′ ≤ 1. This restricts the allowable steps in the warping path to adjacent cells (including diagonally adjacent cells).

•         Monotonicity: given wk = (a, b), then wk−1 = (a′, b′) where a − a′ ≥ 0 and b − b′ ≥ 0. This forces the points in W to be monotonically spaced in time.

There are exponentially many warping paths that satisfy the above conditions; however, we are interested only in the path that minimizes the warping cost:

DTW(Q, C) = min{ √(Σ k=1..K wk) / K } (4)

The K in the denominator is used to compensate for the fact that warping paths may have different lengths.

This path can be found very efficiently using dynamic programming to evaluate the following recurrence, which defines the cumulative distance γ(i, j) as the distance d(i, j) found in the current cell plus the minimum of the cumulative distances of the adjacent elements:

γ(i, j) = d(qi, cj) + min{ γ(i−1, j−1), γ(i−1, j), γ(i, j−1) } (5)

The Euclidean distance between two sequences can be seen as a special case of DTW where the k-th element of W is constrained such that wk = (i, j)k, i = j = k. It is only defined in the special case where the two sequences have the same length.

The time complexity of DTW is O(nm). However, this is just for comparing two sequences. In data mining applications, one of the following two situations arises:

1) Whole matching: a query sequence Q and X sequences of approximately the same length are in the database. We need to find the sequence that is most similar to Q.

2) Subsequence matching: a query sequence Q and a much longer sequence R of length X are in the database. We need to find the subsection of R that is most similar to Q.
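This dynamic programming recurrence can be sketched in a few lines; the version below is minimal, using the squared point-wise distance d(qi, cj) = (qi − cj)² and an infinite border row and column to enforce the boundary condition:

```python
# Minimal DTW sketch: gamma[i][j] is the cumulative distance of the best
# warping path ending at cell (i, j), computed with the recurrence
# gamma(i, j) = d(qi, cj) + min(gamma(i-1, j-1), gamma(i-1, j), gamma(i, j-1)).

def dtw(q, c):
    n, m = len(q), len(c)
    INF = float("inf")
    gamma = [[INF] * (m + 1) for _ in range(n + 1)]
    gamma[0][0] = 0.0  # boundary condition: paths start at (1, 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (q[i - 1] - c[j - 1]) ** 2  # squared point-wise distance
            gamma[i][j] = d + min(gamma[i - 1][j - 1],  # diagonal step
                                  gamma[i - 1][j],      # step in Q only
                                  gamma[i][j - 1])      # step in C only
    return gamma[n][m]

# The same shape played at different "speeds" matches perfectly:
print(dtw([1, 2, 3, 4], [1, 1, 2, 2, 3, 3, 4, 4]))  # 0.0
# Reversed sequences do not:
print(dtw([1, 2, 3, 4], [4, 3, 2, 1]))
```

The nested loop makes the O(nm) complexity discussed below directly visible: one cell of the cumulative-distance matrix is filled per pair (i, j).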
To find the best match, we need to “slide” the query along R, testing every possible subsection of R. In either case the time complexity is O(n²X), which is intractable for many real-world problems.

A Hidden Markov Model (HMM) is a statistical model that outputs a sequence of symbols or quantities. HMMs can be trained automatically and are simple and feasible to use. In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech and decorrelating the spectrum using a cosine transform. At each time step, the system makes a transition from the state it is in to another state, and then emits an observable quantity according to a state-specific probability distribution. More precisely, a hidden Markov model is defined by the following things:

1. A set of possible states Q = {q1, …, qN}.

2. A state transition matrix A, where aij is the probability of making a transition from state qi to state qj.

3. A prior distribution over the state of the system at an initial point in time.

4. A state-conditioned probability distribution over observations, that is, a specification of P(o | qi) for every state and all possible observations.

The observation sequence modeled by the HMM may be either discrete or continuous in nature, but because of the transition matrix, the state space is required to be discrete. Hidden Markov models have been used in a wide variety of application fields, with great success. Examples include gene prediction, protein secondary-structure prediction, handwriting recognition, and speech recognition. The use of HMMs is well illustrated by a (simplified) example from computational biology: the problem of predicting whether a region of DNA codes for a gene.
The DNA in the chromosome of a higher animal falls into one of two categories: it either codes for a protein and can be used by a cell as a template for constructing that protein, or it is extraneous with respect to protein coding. The former regions are referred to as exons, and the latter as introns. Introns are “spliced out” of a DNA strand in the process of transcription. The ability to recognize exons is significant to biologists because it allows them to identify and study regions of biological significance.

An HMM can be used to model this distinction by assuming that the DNA sequence is generated by a system that essentially acts like a typist. The system can either be in the state of “typing out” a gene or of “typing out” a non-coding region. When in the gene-producing state, base pairs from the set {A, C, T, G} are emitted with characteristic frequencies. When in the intron state, the characteristic frequencies are different. The HMM is “trained” to learn these characteristic frequencies, and the probability of switching from one region to another, from examples of DNA where the coding and non-coding regions are known. Using this information, the HMM can find the likeliest partitioning of an unknown sequence into coding and non-coding regions.

The HMM methodology has been quite successful, and this is indicated by the large number of variations that have been explored. One approach, used by researchers at IBM, is to associate output distributions with transitions rather than states. Ostensibly, this has the effect of squaring the number of output distributions; in fact, the two approaches are formally equivalent. The assumption of time-invariant transition probabilities implies an exponentially decreasing a-priori distribution over durations, but in cases where this is undesirable, it is possible to explicitly model the state durations. Another important variation deals with the modeling of autoregressive observation sequences.
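The “typist” model can be sketched as a two-state HMM. All probabilities below are invented for illustration (they are not taken from real genomic data); the function computes the standard HMM joint probability of an observation sequence and a state path, P(o, q) = P(q1) P(o1|q1) ∏ P(qt|qt−1) P(ot|qt):

```python
# Two-state exon/intron HMM with illustrative (made-up) probabilities.
prior = {"exon": 0.5, "intron": 0.5}                 # initial distribution
trans = {"exon":   {"exon": 0.9, "intron": 0.1},     # transition matrix A
         "intron": {"exon": 0.1, "intron": 0.9}}
emit  = {"exon":   {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},  # P(base | state)
         "intron": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def joint_probability(obs, path):
    """P(o, q): probability of emitting obs while following the state path."""
    p = prior[path[0]] * emit[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
    return p

print(joint_probability("ACG", ["exon", "exon", "exon"]))  # 0.5*0.2*0.9*0.3*0.9*0.3
```

Finding the likeliest partitioning of an unknown sequence into coding and non-coding regions then amounts to maximizing this quantity over all state paths, which the Viterbi dynamic programming algorithm does efficiently.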
The assumption behind autoregressive HMMs (Poritz 1982) is that it is reasonable to model the output yt at time t as a linear combination of the immediately preceding values. The precise assumption is that the observation stream is real-valued, and yt = Σ i=1..k ai yt−i + ut. The term ut represents a normally distributed error term, and the ai are autoregressive coefficients. Essentially, this model tries to predict the current observation from the past k observations. Since the errors are assumed to be normally distributed with some standard deviation σ, the probability of a particular error can be computed as (1/(σ√(2π))) exp(−ut²/(2σ²)). The errors are also assumed to be independent and identically distributed, so the probability of a sequence of observations can be computed as the product of their individual probabilities. The idea behind an autoregressive HMM is to associate a set of predictor coefficients with each state and compute the observation probability from the prediction errors.

There are several algorithms available for use with HMMs. Denote a fixed-length observation sequence by o = (o1, o2, …, on) and a corresponding state sequence by q = (q1, q2, …, qn). An HMM defines a joint probability distribution over observation sequences: the value of P(qi | qi−1) is specified in the state transition matrix, and the value of P(oi | qi) is specified by the observation distributions associated with the HMM. We denote the assertion that the state of the system at time t was qi by Qt = qi. There are efficient algorithms for computing the quantities of interest; since the algorithms themselves are well known, we do not present them here, and note only that the running time is in all cases proportional to n|Q|².

Conclusion

